Test Report: Docker_Linux 14070

00cd5342a55ca888d8306eb2334aa46bcc205630:2022-05-09:23841

Failed tests (42/288)

Order  Failed test  Duration (s)
19 TestDownloadOnly/v1.24.1-rc.0/cached-images 0
33 TestAddons/parallel/MetricsServer 312.23
43 TestForceSystemdFlag 73.35
44 TestForceSystemdEnv 73.19
217 TestKubernetesUpgrade 71.94
263 TestStartStop/group/no-preload/serial/DeployApp 0.27
264 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.22
265 TestStartStop/group/no-preload/serial/Stop 0.18
266 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.25
267 TestStartStop/group/no-preload/serial/SecondStart 0.25
268 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.12
269 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.16
270 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.19
271 TestStartStop/group/no-preload/serial/Pause 0.3
274 TestStartStop/group/embed-certs/serial/DeployApp 0.31
275 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.29
276 TestStartStop/group/embed-certs/serial/Stop 0.2
277 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.31
278 TestStartStop/group/embed-certs/serial/SecondStart 0.3
281 TestStartStop/group/default-k8s-different-port/serial/DeployApp 0.29
282 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.13
283 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.17
284 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 0.26
285 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.21
286 TestStartStop/group/default-k8s-different-port/serial/Stop 0.21
287 TestStartStop/group/embed-certs/serial/Pause 0.34
288 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 0.28
289 TestStartStop/group/default-k8s-different-port/serial/SecondStart 0.26
290 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 0.12
291 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 0.17
292 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 0.19
293 TestStartStop/group/default-k8s-different-port/serial/Pause 0.34
297 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.21
298 TestStartStop/group/newest-cni/serial/Stop 0.2
299 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.28
300 TestStartStop/group/newest-cni/serial/SecondStart 0.27
303 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.21
304 TestNetworkPlugins/group/auto/Start 216.09
305 TestStartStop/group/newest-cni/serial/Pause 0.31
336 TestNetworkPlugins/group/kubenet/HairPin 58.3
338 TestNetworkPlugins/group/calico/Start 521.36
350 TestNetworkPlugins/group/false/DNS 366.93

TestDownloadOnly/v1.24.1-rc.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.24.1-rc.0/cached-images
aaa_download_only_test.go:133: expected image file exist at "/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.24.1-rc.0" but got error: stat /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.24.1-rc.0: no such file or directory
aaa_download_only_test.go:133: expected image file exist at "/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.24.1-rc.0" but got error: stat /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.24.1-rc.0: no such file or directory
aaa_download_only_test.go:133: expected image file exist at "/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.24.1-rc.0" but got error: stat /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.24.1-rc.0: no such file or directory
aaa_download_only_test.go:133: expected image file exist at "/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.24.1-rc.0" but got error: stat /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.24.1-rc.0: no such file or directory
aaa_download_only_test.go:133: expected image file exist at "/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cache/images/k8s.gcr.io/pause_3.7" but got error: stat /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cache/images/k8s.gcr.io/pause_3.7: no such file or directory
aaa_download_only_test.go:133: expected image file exist at "/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cache/images/k8s.gcr.io/etcd_3.5.3-0" but got error: stat /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cache/images/k8s.gcr.io/etcd_3.5.3-0: no such file or directory
aaa_download_only_test.go:133: expected image file exist at "/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.6" but got error: stat /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.6: no such file or directory
aaa_download_only_test.go:133: expected image file exist at "/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" but got error: stat /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5: no such file or directory
--- FAIL: TestDownloadOnly/v1.24.1-rc.0/cached-images (0.00s)
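
For context, the failing check above (aaa_download_only_test.go:133) amounts to stat-ing each expected image file under the profile's cache directory. Below is a minimal standalone Go sketch of that pattern, not the actual minikube test code; the cache root and image list are illustrative assumptions, and the ':' of an image tag becoming '_' on disk matches the paths in the errors.

// Sketch only: reproduce the cached-image existence check pattern.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// Assumed cache layout; the report's runs use a per-job MINIKUBE_HOME.
	cacheRoot := os.ExpandEnv("$HOME/.minikube/cache/images")
	images := []string{
		"k8s.gcr.io/kube-apiserver:v1.24.1-rc.0",
		"k8s.gcr.io/pause:3.7",
		"k8s.gcr.io/coredns/coredns:v1.8.6",
	}
	for _, img := range images {
		// The cache stores "repo/name_tag": the tag's ':' is replaced by '_'.
		p := filepath.Join(cacheRoot, strings.ReplaceAll(img, ":", "_"))
		if _, err := os.Stat(p); err != nil {
			fmt.Printf("expected image file exist at %q but got error: %v\n", p, err)
		}
	}
}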

TestAddons/parallel/MetricsServer (312.23s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:357: metrics-server stabilized in 8.727842ms

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:359: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/MetricsServer
helpers_test.go:342: "metrics-server-5ccbd9cf46-9zwvx" [4e835545-32d0-4e32-abd8-7358d6da7010] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:359: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.008658837s
addons_test.go:365: (dbg) Run:  kubectl --context addons-20220509082454-6723 top pods -n kube-system

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:365: (dbg) Non-zero exit: kubectl --context addons-20220509082454-6723 top pods -n kube-system: exit status 1 (56.029585ms)

** stderr ** 
	Error from server (ServiceUnavailable): the server is currently unable to handle the request (get pods.metrics.k8s.io)

** /stderr **
addons_test.go:365: (dbg) Run:  kubectl --context addons-20220509082454-6723 top pods -n kube-system
addons_test.go:365: (dbg) Non-zero exit: kubectl --context addons-20220509082454-6723 top pods -n kube-system: exit status 1 (59.842062ms)

** stderr ** 
	Error from server (ServiceUnavailable): the server is currently unable to handle the request (get pods.metrics.k8s.io)

** /stderr **

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:365: (dbg) Run:  kubectl --context addons-20220509082454-6723 top pods -n kube-system
addons_test.go:365: (dbg) Non-zero exit: kubectl --context addons-20220509082454-6723 top pods -n kube-system: exit status 1 (52.118854ms)

** stderr ** 
	Error from server (ServiceUnavailable): the server is currently unable to handle the request (get pods.metrics.k8s.io)

** /stderr **

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:365: (dbg) Run:  kubectl --context addons-20220509082454-6723 top pods -n kube-system
addons_test.go:365: (dbg) Non-zero exit: kubectl --context addons-20220509082454-6723 top pods -n kube-system: exit status 1 (50.30321ms)

** stderr ** 
	Error from server (ServiceUnavailable): the server is currently unable to handle the request (get pods.metrics.k8s.io)

** /stderr **

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:365: (dbg) Run:  kubectl --context addons-20220509082454-6723 top pods -n kube-system
addons_test.go:365: (dbg) Non-zero exit: kubectl --context addons-20220509082454-6723 top pods -n kube-system: exit status 1 (50.893765ms)

** stderr ** 
	Error from server (ServiceUnavailable): the server is currently unable to handle the request (get pods.metrics.k8s.io)

** /stderr **

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:365: (dbg) Run:  kubectl --context addons-20220509082454-6723 top pods -n kube-system
addons_test.go:365: (dbg) Non-zero exit: kubectl --context addons-20220509082454-6723 top pods -n kube-system: exit status 1 (47.053275ms)

** stderr ** 
	Error from server (ServiceUnavailable): the server is currently unable to handle the request (get pods.metrics.k8s.io)

** /stderr **
addons_test.go:365: (dbg) Run:  kubectl --context addons-20220509082454-6723 top pods -n kube-system
addons_test.go:365: (dbg) Non-zero exit: kubectl --context addons-20220509082454-6723 top pods -n kube-system: exit status 1 (48.833349ms)

** stderr ** 
	Error from server (ServiceUnavailable): the server is currently unable to handle the request (get pods.metrics.k8s.io)

** /stderr **
addons_test.go:365: (dbg) Run:  kubectl --context addons-20220509082454-6723 top pods -n kube-system
addons_test.go:365: (dbg) Non-zero exit: kubectl --context addons-20220509082454-6723 top pods -n kube-system: exit status 1 (48.273066ms)

** stderr ** 
	Error from server (ServiceUnavailable): the server is currently unable to handle the request (get pods.metrics.k8s.io)

** /stderr **
addons_test.go:365: (dbg) Run:  kubectl --context addons-20220509082454-6723 top pods -n kube-system
addons_test.go:365: (dbg) Non-zero exit: kubectl --context addons-20220509082454-6723 top pods -n kube-system: exit status 1 (48.486869ms)

** stderr ** 
	Error from server (ServiceUnavailable): the server is currently unable to handle the request (get pods.metrics.k8s.io)

** /stderr **
addons_test.go:365: (dbg) Run:  kubectl --context addons-20220509082454-6723 top pods -n kube-system
addons_test.go:365: (dbg) Non-zero exit: kubectl --context addons-20220509082454-6723 top pods -n kube-system: exit status 1 (49.033743ms)

** stderr ** 
	Error from server (ServiceUnavailable): the server is currently unable to handle the request (get pods.metrics.k8s.io)

** /stderr **
addons_test.go:365: (dbg) Run:  kubectl --context addons-20220509082454-6723 top pods -n kube-system
addons_test.go:365: (dbg) Non-zero exit: kubectl --context addons-20220509082454-6723 top pods -n kube-system: exit status 1 (48.090952ms)

** stderr ** 
	Error from server (ServiceUnavailable): the server is currently unable to handle the request (get pods.metrics.k8s.io)

** /stderr **
addons_test.go:379: failed checking metric server: exit status 1
addons_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220509082454-6723 addons disable metrics-server --alsologtostderr -v=1
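
The sequence above is a plain poll-until-deadline loop: the test keeps re-running "kubectl top pods -n kube-system", and every attempt returned ServiceUnavailable for pods.metrics.k8s.io, meaning the metrics-server APIService never became ready before the check gave up. Below is a minimal Go sketch of that polling pattern; it is an illustration under stated assumptions, not the test's actual helper. The context name comes from the log, while the 5-minute deadline and 10-second backoff are invented for the example.

// Sketch only: poll "kubectl top pods" until it succeeds or times out.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(5 * time.Minute) // assumed timeout
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", "addons-20220509082454-6723",
			"top", "pods", "-n", "kube-system").CombinedOutput()
		if err == nil {
			fmt.Printf("metrics available:\n%s", out)
			return
		}
		// In the run above this failed every time with ServiceUnavailable
		// for pods.metrics.k8s.io, i.e. the APIService never became ready.
		fmt.Printf("not ready yet: %v\n", err)
		time.Sleep(10 * time.Second) // assumed backoff
	}
	fmt.Println("failed checking metric server: timed out")
}
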
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-20220509082454-6723
helpers_test.go:235: (dbg) docker inspect addons-20220509082454-6723:

-- stdout --
	[
	    {
	        "Id": "547be1dee1137f29a2f7b49182e52539b5483115aa1ef1aa721ca15bf6369620",
	        "Created": "2022-05-09T08:25:11.083729274Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 8627,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-05-09T08:25:11.482239511Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:44d43b69f3d5ba7f801dca891b535f23f9839671e82277938ec7dc42a22c50d6",
	        "ResolvConfPath": "/var/lib/docker/containers/547be1dee1137f29a2f7b49182e52539b5483115aa1ef1aa721ca15bf6369620/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/547be1dee1137f29a2f7b49182e52539b5483115aa1ef1aa721ca15bf6369620/hostname",
	        "HostsPath": "/var/lib/docker/containers/547be1dee1137f29a2f7b49182e52539b5483115aa1ef1aa721ca15bf6369620/hosts",
	        "LogPath": "/var/lib/docker/containers/547be1dee1137f29a2f7b49182e52539b5483115aa1ef1aa721ca15bf6369620/547be1dee1137f29a2f7b49182e52539b5483115aa1ef1aa721ca15bf6369620-json.log",
	        "Name": "/addons-20220509082454-6723",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-20220509082454-6723:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-20220509082454-6723",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9a6886679b366f24fd7eadb258a48da39c07c6bbb40a5eaf5fab66ec7c4bdb6f-init/diff:/var/lib/docker/overlay2/beaaca4c58fe6ff4bdb88567c3d78ab7a23955eafaa5637df03ee2e0482d2aa6/diff:/var/lib/docker/overlay2/7c16b810bbfa3a2abff75078fa37b4cba0b2f101ff43d49beaabc3fd2602b1c9/diff:/var/lib/docker/overlay2/60f04c0e4baa8ad1c02ae5e34e6f505db0d740e2d7dc0833b2ff3b8037c1a9b6/diff:/var/lib/docker/overlay2/a12543300ae4ff803b2f0493a60a04a921312ec5a7b6ed493e66acadf998daef/diff:/var/lib/docker/overlay2/2d68f658a64cd8b7255bce93547db5d1b20b119ef6da8b9ce06614134661f235/diff:/var/lib/docker/overlay2/0f968f210c1565f6e8c4e444c650502c06a120d8767c9fefb7b6b2a09f4af83b/diff:/var/lib/docker/overlay2/987a67893acdccd514357a50db6c11680ae1899ec07a36085367a241ba1f0545/diff:/var/lib/docker/overlay2/d5446d1adc007f4390aa6173ef257b0bf52a9c9cc6533f16dd6c577fde5334b3/diff:/var/lib/docker/overlay2/265dd0cde77d578f6412ad48596c75c3548b5e077059c0fa835e4f22775fab83/diff:/var/lib/docker/overlay2/e6b98b08dd64e06639e2a321169f84a89e8d21cea02d673819156eeb7a4747c3/diff:/var/lib/docker/overlay2/acf7117613a840e97b7a1101bfdce3a154a335ee0ffcaf74bd7f1b27b00cdaab/diff:/var/lib/docker/overlay2/9e88fa64aa800dac6e01eddc9cfd525a0dcd906c1adf98de066be87f87a6b52c/diff:/var/lib/docker/overlay2/6c539edead197cd449ca81fd34e3c534a6ca25446a52141d0ae4484aaac05482/diff:/var/lib/docker/overlay2/866719c52f41a692b94f42071d314de55b290912f946f4ec5f3785a7a5ae40df/diff:/var/lib/docker/overlay2/1c488a9b3bad141652873f4085ce1a466e1f1e8ccbd086d03a492b45179a6064/diff:/var/lib/docker/overlay2/bce5b942da09326e5a5c8595077544ced08031b1cd9b22ff8bff0a4458540139/diff:/var/lib/docker/overlay2/ffe34c3764a4eeac791f4672d47389ec3f399f716ab6e5531d7c5e587f3ace00/diff:/var/lib/docker/overlay2/a377e0779d03467b09a26898ae35b12f325d62984304817f49c693b2146c08f4/diff:/var/lib/docker/overlay2/6092380f07b29488cf0a30cb486638d86eaa8e00ff356a7985c6ac6f2fae5c1d/diff:/var/lib/docker/overlay2/1bc014cc0cf0a91c61f131c06b0194709f853dec23defe9233cbd9cc40030c28/diff:/var/lib/docker/overlay2/0c81c4db7384c48318a800165af7f811348a8081efdd9dbd912e05e55c9eb4e0/diff:/var/lib/docker/overlay2/72b0c515d90bc71e27b766a9be89e315777a5bbe643d8fc508a9ae12557a58ce/diff:/var/lib/docker/overlay2/a4d193bf8c377d4cf1357b9261d3c54d995f17bda3db5abee3ef5caec001d75e/diff:/var/lib/docker/overlay2/763c6d291074b0842ecdeec1f3842fd6a0af0cb86839c82bc38cec0f40d095ca/diff:/var/lib/docker/overlay2/e4156eb2a94ac7136eb674fb8c22ea7f6dff50cd81e4857119d81112dc5ad99d/diff:/var/lib/docker/overlay2/be7effa3bb906b8c48aefab3cc72e657931bccd42c35b03bb52c679c37c70d25/diff:/var/lib/docker/overlay2/ccf8c1a68774cc6129c43490c675e6ba0ff0c88284ad899c9efb4b7492e92a06/diff:/var/lib/docker/overlay2/55f74cddb5c8f2da1131ddd67f7f2a20b8c2be8719b71047d5185fdb4722627d/diff:/var/lib/docker/overlay2/f6c38e83545e9de87d03b1e8e9b0239079acf47784afc11291b2172a11f8296d/diff:/var/lib/docker/overlay2/24604ce83180fcb6b9e1adbaf37db6776e4949ca5e1f4f9050b8fb8dc87d7591/diff:/var/lib/docker/overlay2/b03b1952bea32e53ce88b0d92d09f78ca76ceca0a146b088628be224813ae87c/diff:/var/lib/docker/overlay2/d7ee02a5ba355c246c3107f286c4130457358aa56e0d8feb3850d953812fe76d/diff:/var/lib/docker/overlay2/1223ae0e5105ae89c78524490f0ccc5fe6dfa373e175fa70474b06c62aad91c7/diff:/var/lib/docker/overlay2/c3209d71eed94ed66b11ba3a7573c347d5c09a5a4fedb00418e52571abb2b5f9/diff:/var/lib/docker/overlay2/d6d32632e36023dec15e643fd41f77168442327cecdead87a47e70683cb0c660/diff:/var/lib/docker/overlay2/fd413b21f1f34027c98a1b4106f7e7db83e910860edd53ef838ac699659cd451/diff:/var/lib/docker/overlay2/9768e3244c157a7a12b8ec31507c1abc0fc806a9f325099262cfeb41e23a5fe1/diff:/var/lib/docker/overlay2/f65dda9d5b79903f5e68f6b9d4a59214d62f304da77faa7e148b90b402d98ca4/diff:/var/lib/docker/overlay2/b663c2dab6b5df7b606daa62e1df5e57f4a2503c2a9a19211f47359fd685dccf/diff:/var/lib/docker/overlay2/bbd620a7e494844db80a2bcd2fd6f170080c5273f1c33a576501214dd4475464/diff:/var/lib/docker/overlay2/23de36667695cec8ea0f4c98fc580213d356d39a3eba5aaab8b6ebeeb2b71596/diff:/var/lib/docker/overlay2/bafb3e8d91e83dcf824b6d5b3f56a67c483c42723724b009a5aedf578351154f/diff:/var/lib/docker/overlay2/2798e8ce2f51dde257a9cf2dd800492f888fd02515c5f1de4133cc787ee12928/diff:/var/lib/docker/overlay2/e69a88f2dffdd2c7f72c45eaf2e3cd1a8772e0a2af22d6f34d2695bbda62b6e9/diff:/var/lib/docker/overlay2/708816f20892c0e4b94cbe8e80b169fff54ffd870bcde395bd8163dd03b25d0f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9a6886679b366f24fd7eadb258a48da39c07c6bbb40a5eaf5fab66ec7c4bdb6f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9a6886679b366f24fd7eadb258a48da39c07c6bbb40a5eaf5fab66ec7c4bdb6f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9a6886679b366f24fd7eadb258a48da39c07c6bbb40a5eaf5fab66ec7c4bdb6f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-20220509082454-6723",
	                "Source": "/var/lib/docker/volumes/addons-20220509082454-6723/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-20220509082454-6723",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-20220509082454-6723",
	                "name.minikube.sigs.k8s.io": "addons-20220509082454-6723",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "09e6b2c8dfc86ebf42111cdf588d3b3df612469eca590474d43860610db3612b",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49157"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49156"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49153"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49155"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49154"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/09e6b2c8dfc8",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-20220509082454-6723": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "547be1dee113",
	                        "addons-20220509082454-6723"
	                    ],
	                    "NetworkID": "f822b2fb4ade7331a14ae0bc35ed1fe3bb560a94a7d2cf37dd621626e5f8a551",
	                    "EndpointID": "24d51883d3761769483e056396a7cc514fb2df75d39f5451ca4f64c0dd624867",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-20220509082454-6723 -n addons-20220509082454-6723
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220509082454-6723 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-20220509082454-6723 logs -n 25: (1.196474681s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-------------------------------------|-------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |               Profile               |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|-------------------------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                               | download-only-20220509082434-6723   | jenkins | v1.25.2 | 09 May 22 08:24 UTC | 09 May 22 08:24 UTC |
	| delete  | -p                                  | download-only-20220509082434-6723   | jenkins | v1.25.2 | 09 May 22 08:24 UTC | 09 May 22 08:24 UTC |
	|         | download-only-20220509082434-6723   |                                     |         |         |                     |                     |
	| delete  | -p                                  | download-only-20220509082434-6723   | jenkins | v1.25.2 | 09 May 22 08:24 UTC | 09 May 22 08:24 UTC |
	|         | download-only-20220509082434-6723   |                                     |         |         |                     |                     |
	| delete  | -p                                  | download-docker-20220509082450-6723 | jenkins | v1.25.2 | 09 May 22 08:24 UTC | 09 May 22 08:24 UTC |
	|         | download-docker-20220509082450-6723 |                                     |         |         |                     |                     |
	| delete  | -p                                  | binary-mirror-20220509082453-6723   | jenkins | v1.25.2 | 09 May 22 08:24 UTC | 09 May 22 08:24 UTC |
	|         | binary-mirror-20220509082453-6723   |                                     |         |         |                     |                     |
	| start   | -p addons-20220509082454-6723       | addons-20220509082454-6723          | jenkins | v1.25.2 | 09 May 22 08:24 UTC | 09 May 22 08:26 UTC |
	|         | --wait=true --memory=4000           |                                     |         |         |                     |                     |
	|         | --alsologtostderr                   |                                     |         |         |                     |                     |
	|         | --addons=registry                   |                                     |         |         |                     |                     |
	|         | --addons=metrics-server             |                                     |         |         |                     |                     |
	|         | --addons=volumesnapshots            |                                     |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver        |                                     |         |         |                     |                     |
	|         | --addons=gcp-auth                   |                                     |         |         |                     |                     |
	|         | --driver=docker                     |                                     |         |         |                     |                     |
	|         | --container-runtime=docker          |                                     |         |         |                     |                     |
	|         | --addons=ingress                    |                                     |         |         |                     |                     |
	|         | --addons=ingress-dns                |                                     |         |         |                     |                     |
	|         | --addons=helm-tiller                |                                     |         |         |                     |                     |
	| addons  | addons-20220509082454-6723          | addons-20220509082454-6723          | jenkins | v1.25.2 | 09 May 22 08:26 UTC | 09 May 22 08:26 UTC |
	|         | addons disable helm-tiller          |                                     |         |         |                     |                     |
	|         | --alsologtostderr -v=1              |                                     |         |         |                     |                     |
	| ip      | addons-20220509082454-6723 ip       | addons-20220509082454-6723          | jenkins | v1.25.2 | 09 May 22 08:26 UTC | 09 May 22 08:26 UTC |
	| addons  | addons-20220509082454-6723          | addons-20220509082454-6723          | jenkins | v1.25.2 | 09 May 22 08:26 UTC | 09 May 22 08:26 UTC |
	|         | addons disable registry             |                                     |         |         |                     |                     |
	|         | --alsologtostderr -v=1              |                                     |         |         |                     |                     |
	| ssh     | addons-20220509082454-6723 ssh      | addons-20220509082454-6723          | jenkins | v1.25.2 | 09 May 22 08:26 UTC | 09 May 22 08:26 UTC |
	|         | curl -s http://127.0.0.1/ -H        |                                     |         |         |                     |                     |
	|         | 'Host: nginx.example.com'           |                                     |         |         |                     |                     |
	| ip      | addons-20220509082454-6723 ip       | addons-20220509082454-6723          | jenkins | v1.25.2 | 09 May 22 08:26 UTC | 09 May 22 08:26 UTC |
	| addons  | addons-20220509082454-6723          | addons-20220509082454-6723          | jenkins | v1.25.2 | 09 May 22 08:26 UTC | 09 May 22 08:26 UTC |
	|         | addons disable ingress-dns          |                                     |         |         |                     |                     |
	|         | --alsologtostderr -v=1              |                                     |         |         |                     |                     |
	| addons  | addons-20220509082454-6723          | addons-20220509082454-6723          | jenkins | v1.25.2 | 09 May 22 08:27 UTC | 09 May 22 08:27 UTC |
	|         | addons disable                      |                                     |         |         |                     |                     |
	|         | csi-hostpath-driver                 |                                     |         |         |                     |                     |
	|         | --alsologtostderr -v=1              |                                     |         |         |                     |                     |
	| addons  | addons-20220509082454-6723          | addons-20220509082454-6723          | jenkins | v1.25.2 | 09 May 22 08:27 UTC | 09 May 22 08:27 UTC |
	|         | addons disable volumesnapshots      |                                     |         |         |                     |                     |
	|         | --alsologtostderr -v=1              |                                     |         |         |                     |                     |
	| addons  | addons-20220509082454-6723          | addons-20220509082454-6723          | jenkins | v1.25.2 | 09 May 22 08:31 UTC | 09 May 22 08:31 UTC |
	|         | addons disable metrics-server       |                                     |         |         |                     |                     |
	|         | --alsologtostderr -v=1              |                                     |         |         |                     |                     |
	|---------|-------------------------------------|-------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/09 08:24:54
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.18.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0509 08:24:54.118809    7953 out.go:296] Setting OutFile to fd 1 ...
	I0509 08:24:54.118943    7953 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0509 08:24:54.118954    7953 out.go:309] Setting ErrFile to fd 2...
	I0509 08:24:54.118962    7953 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0509 08:24:54.119081    7953 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/bin
	I0509 08:24:54.119413    7953 out.go:303] Setting JSON to false
	I0509 08:24:54.120176    7953 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":448,"bootTime":1652084246,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1024-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0509 08:24:54.120251    7953 start.go:125] virtualization: kvm guest
	I0509 08:24:54.123272    7953 out.go:177] * [addons-20220509082454-6723] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0509 08:24:54.125087    7953 out.go:177]   - MINIKUBE_LOCATION=14070
	I0509 08:24:54.125011    7953 notify.go:193] Checking for updates...
	I0509 08:24:54.126898    7953 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0509 08:24:54.128644    7953 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/kubeconfig
	I0509 08:24:54.130362    7953 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube
	I0509 08:24:54.132204    7953 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0509 08:24:54.133795    7953 driver.go:346] Setting default libvirt URI to qemu:///system
	I0509 08:24:54.169770    7953 docker.go:137] docker version: linux-20.10.15
	I0509 08:24:54.169887    7953 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0509 08:24:54.270529    7953 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:60 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:33 SystemTime:2022-05-09 08:24:54.196326474 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1024-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0509 08:24:54.270642    7953 docker.go:254] overlay module found
	I0509 08:24:54.273202    7953 out.go:177] * Using the docker driver based on user configuration
	I0509 08:24:54.274743    7953 start.go:284] selected driver: docker
	I0509 08:24:54.274762    7953 start.go:801] validating driver "docker" against <nil>
	I0509 08:24:54.274783    7953 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0509 08:24:54.274830    7953 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0509 08:24:54.274851    7953 out.go:239] ! Your cgroup does not allow setting memory.
	I0509 08:24:54.276398    7953 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0509 08:24:54.278393    7953 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0509 08:24:54.381501    7953 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:60 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:33 SystemTime:2022-05-09 08:24:54.30564108 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1024-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0509 08:24:54.381615    7953 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0509 08:24:54.381824    7953 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0509 08:24:54.384197    7953 out.go:177] * Using Docker driver with the root privilege
	I0509 08:24:54.385691    7953 cni.go:95] Creating CNI manager for ""
	I0509 08:24:54.385710    7953 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0509 08:24:54.385720    7953 start_flags.go:306] config:
	{Name:addons-20220509082454-6723 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.0 ClusterName:addons-20220509082454-6723 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0509 08:24:54.387392    7953 out.go:177] * Starting control plane node addons-20220509082454-6723 in cluster addons-20220509082454-6723
	I0509 08:24:54.388757    7953 cache.go:120] Beginning downloading kic base image for docker with docker
	I0509 08:24:54.390110    7953 out.go:177] * Pulling base image ...
	I0509 08:24:54.391439    7953 preload.go:132] Checking if preload exists for k8s version v1.24.0 and runtime docker
	I0509 08:24:54.391480    7953 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.0-docker-overlay2-amd64.tar.lz4
	I0509 08:24:54.391497    7953 cache.go:57] Caching tarball of preloaded images
	I0509 08:24:54.391553    7953 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0509 08:24:54.391770    7953 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0509 08:24:54.391785    7953 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.0 on docker
	I0509 08:24:54.392125    7953 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/addons-20220509082454-6723/config.json ...
	I0509 08:24:54.392155    7953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/addons-20220509082454-6723/config.json: {Name:mk4f9a6b51edc3b0597ae7eb653e84e87edcd941 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0509 08:24:54.432960    7953 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0509 08:24:54.433002    7953 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0509 08:24:54.433018    7953 cache.go:206] Successfully downloaded all kic artifacts
	I0509 08:24:54.433059    7953 start.go:352] acquiring machines lock for addons-20220509082454-6723: {Name:mk2c77102f2b9de45294dd0c7a8b5e6c80a2d64d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0509 08:24:54.433211    7953 start.go:356] acquired machines lock for "addons-20220509082454-6723" in 128.161µs
	I0509 08:24:54.433245    7953 start.go:91] Provisioning new machine with config: &{Name:addons-20220509082454-6723 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.0 ClusterName:addons-20220509082454-6723 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.24.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0509 08:24:54.433388    7953 start.go:131] createHost starting for "" (driver="docker")
	I0509 08:24:54.436013    7953 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0509 08:24:54.436240    7953 start.go:165] libmachine.API.Create for "addons-20220509082454-6723" (driver="docker")
	I0509 08:24:54.436273    7953 client.go:168] LocalClient.Create starting
	I0509 08:24:54.436419    7953 main.go:134] libmachine: Creating CA: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/ca.pem
	I0509 08:24:54.567821    7953 main.go:134] libmachine: Creating client certificate: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/cert.pem
	I0509 08:24:54.949436    7953 cli_runner.go:164] Run: docker network inspect addons-20220509082454-6723 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0509 08:24:54.978691    7953 cli_runner.go:211] docker network inspect addons-20220509082454-6723 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0509 08:24:54.978772    7953 network_create.go:272] running [docker network inspect addons-20220509082454-6723] to gather additional debugging logs...
	I0509 08:24:54.978794    7953 cli_runner.go:164] Run: docker network inspect addons-20220509082454-6723
	W0509 08:24:55.007238    7953 cli_runner.go:211] docker network inspect addons-20220509082454-6723 returned with exit code 1
	I0509 08:24:55.007280    7953 network_create.go:275] error running [docker network inspect addons-20220509082454-6723]: docker network inspect addons-20220509082454-6723: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: addons-20220509082454-6723
	I0509 08:24:55.007304    7953 network_create.go:277] output of [docker network inspect addons-20220509082454-6723]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: addons-20220509082454-6723
	
	** /stderr **
	I0509 08:24:55.007370    7953 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0509 08:24:55.037051    7953 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00011c328] misses:0}
	I0509 08:24:55.037102    7953 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0509 08:24:55.037118    7953 network_create.go:115] attempt to create docker network addons-20220509082454-6723 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0509 08:24:55.037169    7953 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20220509082454-6723
	I0509 08:24:55.102162    7953 network_create.go:99] docker network addons-20220509082454-6723 192.168.49.0/24 created
	I0509 08:24:55.102203    7953 kic.go:106] calculated static IP "192.168.49.2" for the "addons-20220509082454-6723" container
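
The network.go and kic.go lines above explain why a fresh docker-driver cluster always lands on 192.168.49.2: the first free private /24 is reserved, a bridge network is created with a fixed gateway of 192.168.49.1, and the node container is assigned the first client address in that range. A minimal standalone Go sketch of the same docker invocation (a hypothetical program, not minikube's actual cli_runner source; if the subnet is taken, minikube retries with later candidate subnets):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Equivalent of the logged cli_runner call: create the bridge network
        // the node container will join, with a fixed subnet and gateway, and
        // label it so minikube can find and clean it up later.
        out, err := exec.Command("docker", "network", "create",
            "--driver=bridge",
            "--subnet=192.168.49.0/24",
            "--gateway=192.168.49.1",
            "-o", "--ip-masq",
            "-o", "--icc",
            "-o", "com.docker.network.driver.mtu=1500",
            "--label=created_by.minikube.sigs.k8s.io=true",
            "addons-20220509082454-6723").CombinedOutput()
        if err != nil {
            fmt.Printf("network create failed: %v\n%s", err, out)
            return
        }
        fmt.Printf("created network %s", out) // docker prints the new network ID
    }
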
	I0509 08:24:55.102256    7953 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0509 08:24:55.131781    7953 cli_runner.go:164] Run: docker volume create addons-20220509082454-6723 --label name.minikube.sigs.k8s.io=addons-20220509082454-6723 --label created_by.minikube.sigs.k8s.io=true
	I0509 08:24:55.162825    7953 oci.go:103] Successfully created a docker volume addons-20220509082454-6723
	I0509 08:24:55.162928    7953 cli_runner.go:164] Run: docker run --rm --name addons-20220509082454-6723-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20220509082454-6723 --entrypoint /usr/bin/test -v addons-20220509082454-6723:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -d /var/lib
	I0509 08:25:04.283381    7953 cli_runner.go:217] Completed: docker run --rm --name addons-20220509082454-6723-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20220509082454-6723 --entrypoint /usr/bin/test -v addons-20220509082454-6723:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -d /var/lib: (9.120401202s)
	I0509 08:25:04.283410    7953 oci.go:107] Successfully prepared a docker volume addons-20220509082454-6723
	I0509 08:25:04.283442    7953 preload.go:132] Checking if preload exists for k8s version v1.24.0 and runtime docker
	I0509 08:25:04.283469    7953 kic.go:179] Starting extracting preloaded images to volume ...
	I0509 08:25:04.283532    7953 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-20220509082454-6723:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I0509 08:25:10.951138    7953 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-20220509082454-6723:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir: (6.66755109s)
	I0509 08:25:10.951170    7953 kic.go:188] duration metric: took 6.667698 seconds to extract preloaded images to volume
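
The two docker run calls above are the preload fast path: a sidecar container first validates the named volume (9.12s), then tar unpacks the lz4-compressed image tarball straight into it (6.67s), so dockerd inside the node starts with all Kubernetes images already present. A rough Go equivalent of the extraction step, with the long Jenkins host path replaced by a placeholder:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        // Unpack the lz4-compressed preload tarball into the named volume by
        // running tar inside a throwaway kicbase container, as logged above.
        // The host tarball path is a placeholder for the Jenkins cache path.
        start := time.Now()
        out, err := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", "/path/to/preloaded-images-k8s-v18-v1.24.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro",
            "-v", "addons-20220509082454-6723:/extractDir",
            "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815",
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir",
        ).CombinedOutput()
        if err != nil {
            fmt.Printf("extract failed: %v\n%s", err, out)
            return
        }
        fmt.Printf("extracted preload in %s\n", time.Since(start)) // the log's duration metric
    }
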
	W0509 08:25:10.951213    7953 oci.go:136] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0509 08:25:10.951225    7953 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0509 08:25:10.951283    7953 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0509 08:25:11.052965    7953 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-20220509082454-6723 --name addons-20220509082454-6723 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20220509082454-6723 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-20220509082454-6723 --network addons-20220509082454-6723 --ip 192.168.49.2 --volume addons-20220509082454-6723:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5
	I0509 08:25:11.492022    7953 cli_runner.go:164] Run: docker container inspect addons-20220509082454-6723 --format={{.State.Running}}
	I0509 08:25:11.524500    7953 cli_runner.go:164] Run: docker container inspect addons-20220509082454-6723 --format={{.State.Status}}
	I0509 08:25:11.557976    7953 cli_runner.go:164] Run: docker exec addons-20220509082454-6723 stat /var/lib/dpkg/alternatives/iptables
	I0509 08:25:11.643411    7953 oci.go:279] the created container "addons-20220509082454-6723" has a running status.
	I0509 08:25:11.643458    7953 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/addons-20220509082454-6723/id_rsa...
	I0509 08:25:11.852242    7953 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/addons-20220509082454-6723/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0509 08:25:11.936137    7953 cli_runner.go:164] Run: docker container inspect addons-20220509082454-6723 --format={{.State.Status}}
	I0509 08:25:11.972132    7953 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0509 08:25:11.972164    7953 kic_runner.go:114] Args: [docker exec --privileged addons-20220509082454-6723 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0509 08:25:12.058937    7953 cli_runner.go:164] Run: docker container inspect addons-20220509082454-6723 --format={{.State.Status}}
	I0509 08:25:12.093005    7953 machine.go:88] provisioning docker machine ...
	I0509 08:25:12.093055    7953 ubuntu.go:169] provisioning hostname "addons-20220509082454-6723"
	I0509 08:25:12.093124    7953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220509082454-6723
	I0509 08:25:12.127175    7953 main.go:134] libmachine: Using SSH client type: native
	I0509 08:25:12.127353    7953 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d8f60] 0x7dbfc0 <nil>  [] 0s} 127.0.0.1 49157 <nil> <nil>}
	I0509 08:25:12.127376    7953 main.go:134] libmachine: About to run SSH command:
	sudo hostname addons-20220509082454-6723 && echo "addons-20220509082454-6723" | sudo tee /etc/hostname
	I0509 08:25:12.286593    7953 main.go:134] libmachine: SSH cmd err, output: <nil>: addons-20220509082454-6723
	
	I0509 08:25:12.286693    7953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220509082454-6723
	I0509 08:25:12.323287    7953 main.go:134] libmachine: Using SSH client type: native
	I0509 08:25:12.323461    7953 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d8f60] 0x7dbfc0 <nil>  [] 0s} 127.0.0.1 49157 <nil> <nil>}
	I0509 08:25:12.323495    7953 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-20220509082454-6723' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-20220509082454-6723/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-20220509082454-6723' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0509 08:25:12.444658    7953 main.go:134] libmachine: SSH cmd err, output: <nil>: 
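
Both SSH commands above (setting the hostname, then patching /etc/hosts inside the container) run over the container's published loopback port 49157 with the freshly generated id_rsa key. A minimal sketch with golang.org/x/crypto/ssh, assuming the key path and port from this run (keyPath is a placeholder for the machines directory logged above; InsecureIgnoreHostKey is tolerable only because the endpoint is a loopback port owned by this test run):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Run the logged hostname command over the container's published SSH port.
        keyPath := "/path/to/.minikube/machines/addons-20220509082454-6723/id_rsa"
        key, err := os.ReadFile(keyPath)
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:49157", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(`sudo hostname addons-20220509082454-6723 && echo "addons-20220509082454-6723" | sudo tee /etc/hostname`)
        fmt.Printf("%s (err=%v)\n", out, err)
    }
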
	I0509 08:25:12.444694    7953 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube}
	I0509 08:25:12.444747    7953 ubuntu.go:177] setting up certificates
	I0509 08:25:12.444763    7953 provision.go:83] configureAuth start
	I0509 08:25:12.444820    7953 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20220509082454-6723
	I0509 08:25:12.481585    7953 provision.go:138] copyHostCerts
	I0509 08:25:12.481660    7953 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/ca.pem (1078 bytes)
	I0509 08:25:12.481770    7953 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cert.pem (1123 bytes)
	I0509 08:25:12.481845    7953 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/key.pem (1679 bytes)
	I0509 08:25:12.481901    7953 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/ca-key.pem org=jenkins.addons-20220509082454-6723 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-20220509082454-6723]
	I0509 08:25:12.584131    7953 provision.go:172] copyRemoteCerts
	I0509 08:25:12.584185    7953 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0509 08:25:12.584232    7953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220509082454-6723
	I0509 08:25:12.616710    7953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49157 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/addons-20220509082454-6723/id_rsa Username:docker}
	I0509 08:25:12.703930    7953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0509 08:25:12.726348    7953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0509 08:25:12.744883    7953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0509 08:25:12.763117    7953 provision.go:86] duration metric: configureAuth took 318.336767ms
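
configureAuth generates a Docker API server certificate signed by the minikube CA, with the SAN list shown at provision.go:112 (the node IP, loopback, and the machine names). A compact crypto/x509 sketch of that signing step, reusing the 26280h CertExpiration from the machine config above (error handling deliberately elided to keep the sketch short):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Self-signed CA, standing in for .minikube/certs/ca.pem.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(26280 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server cert with the SANs from the log, signed by the CA key.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "addons-20220509082454-6723"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            DNSNames:     []string{"localhost", "minikube", "addons-20220509082454-6723"},
            IPAddresses:  []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
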
	I0509 08:25:12.763153    7953 ubuntu.go:193] setting minikube options for container-runtime
	I0509 08:25:12.763301    7953 config.go:178] Loaded profile config "addons-20220509082454-6723": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.0
	I0509 08:25:12.763355    7953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220509082454-6723
	I0509 08:25:12.795916    7953 main.go:134] libmachine: Using SSH client type: native
	I0509 08:25:12.796054    7953 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d8f60] 0x7dbfc0 <nil>  [] 0s} 127.0.0.1 49157 <nil> <nil>}
	I0509 08:25:12.796067    7953 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0509 08:25:12.918305    7953 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0509 08:25:12.918328    7953 ubuntu.go:71] root file system type: overlay
	I0509 08:25:12.918454    7953 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0509 08:25:12.918504    7953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220509082454-6723
	I0509 08:25:12.948278    7953 main.go:134] libmachine: Using SSH client type: native
	I0509 08:25:12.948424    7953 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d8f60] 0x7dbfc0 <nil>  [] 0s} 127.0.0.1 49157 <nil> <nil>}
	I0509 08:25:12.948484    7953 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0509 08:25:13.081710    7953 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0509 08:25:13.081781    7953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220509082454-6723
	I0509 08:25:13.111973    7953 main.go:134] libmachine: Using SSH client type: native
	I0509 08:25:13.112126    7953 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d8f60] 0x7dbfc0 <nil>  [] 0s} 127.0.0.1 49157 <nil> <nil>}
	I0509 08:25:13.112153    7953 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0509 08:25:13.800969    7953 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-03-10 14:05:44.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-05-09 08:25:13.073994334 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
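
The compound command at 08:25:13.112 is a change-detection idiom: diff -u exits zero when the rendered unit matches the installed one, so the mv/daemon-reload/enable/restart branch only runs when the file actually changed; the diff output above is the change that triggered the restart in this run. The same guard, sketched in Go (must run as root inside the node container; paths from the log):

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // Replace docker.service and restart dockerd only if the rendered
        // unit differs from the installed one, mirroring diff || { ... }.
        const cur = "/lib/systemd/system/docker.service"
        const next = "/lib/systemd/system/docker.service.new"
        installed, _ := os.ReadFile(cur) // a missing file simply counts as "changed"
        rendered, err := os.ReadFile(next)
        if err != nil {
            panic(err)
        }
        if bytes.Equal(installed, rendered) {
            fmt.Println("unit unchanged; skipping restart")
            return
        }
        if err := os.Rename(next, cur); err != nil {
            panic(err)
        }
        for _, args := range [][]string{
            {"-f", "daemon-reload"},
            {"-f", "enable", "docker"},
            {"-f", "restart", "docker"},
        } {
            if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
                panic(fmt.Sprintf("systemctl %v: %v\n%s", args, err, out))
            }
        }
    }
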
	
	I0509 08:25:13.800999    7953 machine.go:91] provisioned docker machine in 1.707968205s
	I0509 08:25:13.801009    7953 client.go:171] LocalClient.Create took 19.364726085s
	I0509 08:25:13.801029    7953 start.go:173] duration metric: libmachine.API.Create for "addons-20220509082454-6723" took 19.364788711s
	I0509 08:25:13.801041    7953 start.go:306] post-start starting for "addons-20220509082454-6723" (driver="docker")
	I0509 08:25:13.801050    7953 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0509 08:25:13.801109    7953 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0509 08:25:13.801148    7953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220509082454-6723
	I0509 08:25:13.832810    7953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49157 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/addons-20220509082454-6723/id_rsa Username:docker}
	I0509 08:25:13.921976    7953 ssh_runner.go:195] Run: cat /etc/os-release
	I0509 08:25:13.924827    7953 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0509 08:25:13.924855    7953 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0509 08:25:13.924864    7953 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0509 08:25:13.924871    7953 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0509 08:25:13.924880    7953 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/addons for local assets ...
	I0509 08:25:13.924936    7953 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/files for local assets ...
	I0509 08:25:13.924958    7953 start.go:309] post-start completed in 123.910765ms
	I0509 08:25:13.925240    7953 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20220509082454-6723
	I0509 08:25:13.955555    7953 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/addons-20220509082454-6723/config.json ...
	I0509 08:25:13.955802    7953 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0509 08:25:13.955842    7953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220509082454-6723
	I0509 08:25:13.986250    7953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49157 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/addons-20220509082454-6723/id_rsa Username:docker}
	I0509 08:25:14.069076    7953 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0509 08:25:14.073030    7953 start.go:134] duration metric: createHost completed in 19.639627472s
	I0509 08:25:14.073058    7953 start.go:81] releasing machines lock for "addons-20220509082454-6723", held for 19.639829387s
	I0509 08:25:14.073144    7953 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20220509082454-6723
	I0509 08:25:14.103542    7953 ssh_runner.go:195] Run: systemctl --version
	I0509 08:25:14.103564    7953 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0509 08:25:14.103617    7953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220509082454-6723
	I0509 08:25:14.103630    7953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220509082454-6723
	I0509 08:25:14.138449    7953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49157 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/addons-20220509082454-6723/id_rsa Username:docker}
	I0509 08:25:14.138604    7953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49157 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/addons-20220509082454-6723/id_rsa Username:docker}
	I0509 08:25:14.225066    7953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0509 08:25:14.317947    7953 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0509 08:25:14.327731    7953 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0509 08:25:14.327785    7953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0509 08:25:14.338061    7953 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0509 08:25:14.350854    7953 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0509 08:25:14.428165    7953 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0509 08:25:14.502875    7953 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0509 08:25:14.512035    7953 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0509 08:25:14.590929    7953 ssh_runner.go:195] Run: sudo systemctl start docker
	I0509 08:25:14.600754    7953 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0509 08:25:14.681521    7953 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0509 08:25:14.756977    7953 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0509 08:25:14.768683    7953 start.go:447] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0509 08:25:14.768799    7953 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0509 08:25:14.772016    7953 start.go:468] Will wait 60s for crictl version
	I0509 08:25:14.772077    7953 ssh_runner.go:195] Run: sudo crictl version
	I0509 08:25:15.112238    7953 start.go:477] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.13
	RuntimeApiVersion:  1.41.0
	I0509 08:25:15.112307    7953 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0509 08:25:15.287001    7953 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0509 08:25:15.332488    7953 out.go:204] * Preparing Kubernetes v1.24.0 on Docker 20.10.13 ...
	I0509 08:25:15.332580    7953 cli_runner.go:164] Run: docker network inspect addons-20220509082454-6723 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0509 08:25:15.362466    7953 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0509 08:25:15.365713    7953 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
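
The bash one-liner above rewrites /etc/hosts without sed -i: it filters out any stale host.minikube.internal entry, appends the gateway mapping, writes to a temp file, and copies the result back into place. An equivalent sketch in Go (run as root; the temp-file name is illustrative, where the shell version used /tmp/h.$$):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // Drop any stale host.minikube.internal line, append the gateway
        // mapping, then swap the file into place.
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        var keep []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\thost.minikube.internal") {
                keep = append(keep, line)
            }
        }
        keep = append(keep, "192.168.49.1\thost.minikube.internal")
        if err := os.WriteFile("/etc/hosts.new", []byte(strings.Join(keep, "\n")+"\n"), 0644); err != nil {
            panic(err)
        }
        if err := os.Rename("/etc/hosts.new", "/etc/hosts"); err != nil {
            panic(err)
        }
        fmt.Println("updated /etc/hosts")
    }
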
	I0509 08:25:15.375143    7953 preload.go:132] Checking if preload exists for k8s version v1.24.0 and runtime docker
	I0509 08:25:15.375201    7953 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0509 08:25:15.407812    7953 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.0
	k8s.gcr.io/kube-proxy:v1.24.0
	k8s.gcr.io/kube-controller-manager:v1.24.0
	k8s.gcr.io/kube-scheduler:v1.24.0
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0509 08:25:15.407841    7953 docker.go:541] Images already preloaded, skipping extraction
	I0509 08:25:15.407898    7953 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0509 08:25:15.439906    7953 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.0
	k8s.gcr.io/kube-proxy:v1.24.0
	k8s.gcr.io/kube-scheduler:v1.24.0
	k8s.gcr.io/kube-controller-manager:v1.24.0
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0509 08:25:15.439932    7953 cache_images.go:84] Images are preloaded, skipping loading
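
Listing docker images as {{.Repository}}:{{.Tag}} and seeing the full expected set is what lets cache_images.go:84 skip any image loading here. A small sketch of that presence check, against a subset of the images listed above:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // List repo:tag pairs the way the log does and confirm the images
        // kubeadm will need are already present in the node's docker daemon.
        out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
        if err != nil {
            panic(err)
        }
        have := map[string]bool{}
        for _, img := range strings.Fields(string(out)) {
            have[img] = true
        }
        for _, want := range []string{
            "k8s.gcr.io/kube-apiserver:v1.24.0",
            "k8s.gcr.io/etcd:3.5.3-0",
            "k8s.gcr.io/pause:3.7",
            "k8s.gcr.io/coredns/coredns:v1.8.6",
        } {
            if !have[want] {
                fmt.Println("missing, would extract preload:", want)
            }
        }
    }
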
	I0509 08:25:15.439999    7953 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0509 08:25:15.789118    7953 cni.go:95] Creating CNI manager for ""
	I0509 08:25:15.789153    7953 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0509 08:25:15.789166    7953 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0509 08:25:15.789186    7953 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.24.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-20220509082454-6723 NodeName:addons-20220509082454-6723 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0509 08:25:15.789360    7953 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "addons-20220509082454-6723"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
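
The generated kubeadm.yaml above stacks four YAML documents: InitConfiguration (node registration against cri-dockerd), ClusterConfiguration (API server SANs, admission plugins, subnets), KubeletConfiguration (cgroupfs driver, disabled disk eviction), and KubeProxyConfiguration. A quick structural check of such a file without a YAML dependency, using the path from the log:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // Split the multi-document file on "---" separators and report the
        // kind of each document, as a cheap sanity check.
        data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
        if err != nil {
            panic(err)
        }
        for i, doc := range strings.Split(string(data), "\n---\n") {
            for _, line := range strings.Split(doc, "\n") {
                if strings.HasPrefix(line, "kind: ") {
                    fmt.Printf("document %d: %s\n", i, strings.TrimPrefix(line, "kind: "))
                }
            }
        }
    }
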
	
	I0509 08:25:15.789462    7953 kubeadm.go:936] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=addons-20220509082454-6723 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.0 ClusterName:addons-20220509082454-6723 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0509 08:25:15.789524    7953 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.0
	I0509 08:25:15.799590    7953 binaries.go:44] Found k8s binaries, skipping transfer
	I0509 08:25:15.799646    7953 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0509 08:25:15.806969    7953 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (488 bytes)
	I0509 08:25:15.820895    7953 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0509 08:25:15.834161    7953 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2049 bytes)
	I0509 08:25:15.847842    7953 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0509 08:25:15.851153    7953 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0509 08:25:15.861328    7953 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/addons-20220509082454-6723 for IP: 192.168.49.2
	I0509 08:25:15.861377    7953 certs.go:187] generating minikubeCA CA: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/ca.key
	I0509 08:25:15.934618    7953 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/ca.crt ...
	I0509 08:25:15.934655    7953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/ca.crt: {Name:mkd9a442b54ebc8753a00e898a3aa6675d6e3bf9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0509 08:25:15.934851    7953 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/ca.key ...
	I0509 08:25:15.934865    7953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/ca.key: {Name:mk4abe997e1fc503b1f93d4acce34cf67b343de1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0509 08:25:15.934951    7953 certs.go:187] generating proxyClientCA CA: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/proxy-client-ca.key
	I0509 08:25:15.999282    7953 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/proxy-client-ca.crt ...
	I0509 08:25:15.999317    7953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/proxy-client-ca.crt: {Name:mk10787a7bb55bbe56945a658ffa72ebe047c5d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0509 08:25:15.999497    7953 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/proxy-client-ca.key ...
	I0509 08:25:15.999512    7953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/proxy-client-ca.key: {Name:mkf8556c61c00c9156ee96aca56d9ede55b2e5b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0509 08:25:15.999624    7953 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/addons-20220509082454-6723/client.key
	I0509 08:25:15.999640    7953 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/addons-20220509082454-6723/client.crt with IP's: []
	I0509 08:25:16.407614    7953 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/addons-20220509082454-6723/client.crt ...
	I0509 08:25:16.407652    7953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/addons-20220509082454-6723/client.crt: {Name:mk99f7ec019fd403d05758bb11043bad30d717a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0509 08:25:16.407854    7953 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/addons-20220509082454-6723/client.key ...
	I0509 08:25:16.407867    7953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/addons-20220509082454-6723/client.key: {Name:mk25c854c8691ee49f00d1befb6ab0410524ad27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0509 08:25:16.407946    7953 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/addons-20220509082454-6723/apiserver.key.dd3b5fb2
	I0509 08:25:16.407964    7953 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/addons-20220509082454-6723/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0509 08:25:16.539779    7953 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/addons-20220509082454-6723/apiserver.crt.dd3b5fb2 ...
	I0509 08:25:16.539818    7953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/addons-20220509082454-6723/apiserver.crt.dd3b5fb2: {Name:mk2f69a255730a5f04d2379d842ceda4e860880a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0509 08:25:16.540024    7953 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/addons-20220509082454-6723/apiserver.key.dd3b5fb2 ...
	I0509 08:25:16.540037    7953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/addons-20220509082454-6723/apiserver.key.dd3b5fb2: {Name:mk03bb23969627b95ac347f9fda7e84c9c656f2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0509 08:25:16.540116    7953 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/addons-20220509082454-6723/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/addons-20220509082454-6723/apiserver.crt
	I0509 08:25:16.540181    7953 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/addons-20220509082454-6723/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/addons-20220509082454-6723/apiserver.key
	I0509 08:25:16.540223    7953 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/addons-20220509082454-6723/proxy-client.key
	I0509 08:25:16.540239    7953 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/addons-20220509082454-6723/proxy-client.crt with IP's: []
	I0509 08:25:16.649442    7953 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/addons-20220509082454-6723/proxy-client.crt ...
	I0509 08:25:16.649479    7953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/addons-20220509082454-6723/proxy-client.crt: {Name:mk07f3cf25782f21460b338ac960624baff30ea1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0509 08:25:16.649667    7953 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/addons-20220509082454-6723/proxy-client.key ...
	I0509 08:25:16.649681    7953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/addons-20220509082454-6723/proxy-client.key: {Name:mk0622327bc130f7a5afca45d666bef34296193f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0509 08:25:16.649839    7953 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/ca-key.pem (1679 bytes)
	I0509 08:25:16.649875    7953 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/ca.pem (1078 bytes)
	I0509 08:25:16.649902    7953 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/cert.pem (1123 bytes)
	I0509 08:25:16.649927    7953 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/key.pem (1679 bytes)
	I0509 08:25:16.650463    7953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/addons-20220509082454-6723/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0509 08:25:16.670109    7953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/addons-20220509082454-6723/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0509 08:25:16.689128    7953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/addons-20220509082454-6723/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0509 08:25:16.707982    7953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/addons-20220509082454-6723/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0509 08:25:16.726298    7953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0509 08:25:16.744045    7953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0509 08:25:16.761903    7953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0509 08:25:16.780129    7953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0509 08:25:16.798322    7953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0509 08:25:16.817289    7953 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0509 08:25:16.831594    7953 ssh_runner.go:195] Run: openssl version
	I0509 08:25:16.842019    7953 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0509 08:25:16.853169    7953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0509 08:25:16.856521    7953 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May  9 08:25 /usr/share/ca-certificates/minikubeCA.pem
	I0509 08:25:16.856575    7953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0509 08:25:16.861857    7953 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
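
The final two openssl steps wire the CA into the system trust store: openssl x509 -hash -noout prints the subject hash (b5213941 for minikubeCA.pem in this run), and the symlink /etc/ssl/certs/b5213941.0 is the name OpenSSL-based clients look up when scanning the certs directory during verification. The same step, sketched in Go (run as root):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // Compute the OpenSSL subject hash of the CA and link it under
        // /etc/ssl/certs so directory-scanning TLS clients trust it.
        const ca = "/usr/share/ca-certificates/minikubeCA.pem"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", ca).Output()
        if err != nil {
            panic(err)
        }
        link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
        if err := os.Symlink(ca, link); err != nil && !os.IsExist(err) {
            panic(err)
        }
        fmt.Println("linked", link)
    }
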
	I0509 08:25:16.869985    7953 kubeadm.go:391] StartCluster: {Name:addons-20220509082454-6723 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.0 ClusterName:addons-20220509082454-6723 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.24.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0509 08:25:16.870123    7953 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0509 08:25:16.901686    7953 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0509 08:25:16.909239    7953 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0509 08:25:16.916728    7953 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0509 08:25:16.916791    7953 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0509 08:25:16.923716    7953 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0509 08:25:16.923761    7953 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0509 08:25:30.984664    7953 out.go:204]   - Generating certificates and keys ...
	I0509 08:25:30.988154    7953 out.go:204]   - Booting up control plane ...
	I0509 08:25:30.991175    7953 out.go:204]   - Configuring RBAC rules ...
	I0509 08:25:30.993142    7953 cni.go:95] Creating CNI manager for ""
	I0509 08:25:30.993166    7953 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0509 08:25:30.993206    7953 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0509 08:25:30.993258    7953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0509 08:25:30.993283    7953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.0/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=3bac68e23e7013f03af5baca398608c8c8001fab minikube.k8s.io/name=addons-20220509082454-6723 minikube.k8s.io/updated_at=2022_05_09T08_25_30_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0509 08:25:31.007573    7953 ops.go:34] apiserver oom_adj: -16
	I0509 08:25:31.392053    7953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0509 08:25:31.946815    7953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0509 08:25:32.446289    7953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0509 08:25:32.946190    7953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0509 08:25:33.446516    7953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0509 08:25:33.946883    7953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0509 08:25:34.446977    7953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0509 08:25:34.947170    7953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0509 08:25:35.447227    7953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0509 08:25:35.946798    7953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0509 08:25:36.446302    7953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0509 08:25:36.946909    7953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0509 08:25:37.446296    7953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0509 08:25:37.946221    7953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0509 08:25:38.446511    7953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0509 08:25:38.946920    7953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0509 08:25:39.446535    7953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0509 08:25:39.946289    7953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0509 08:25:40.447128    7953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0509 08:25:40.946215    7953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0509 08:25:41.447109    7953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0509 08:25:41.946595    7953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0509 08:25:42.446251    7953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0509 08:25:42.946586    7953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0509 08:25:43.446351    7953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0509 08:25:43.946353    7953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0509 08:25:44.005141    7953 kubeadm.go:1020] duration metric: took 13.011928695s to wait for elevateKubeSystemPrivileges.
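
The run of identical kubectl calls above (08:25:31 through 08:25:43) is a fixed-interval poll: the same "get sa default" command is re-issued roughly every 500ms until the default service account exists, at which point elevateKubeSystemPrivileges can finish. A shell equivalent of that wait, built only from the command the log repeats:

    # Poll until the default service account exists (same command the log repeats ~every 500ms).
    until sudo /var/lib/minikube/binaries/v1.24.0/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
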
	I0509 08:25:44.005170    7953 kubeadm.go:393] StartCluster complete in 27.135195316s
	I0509 08:25:44.005192    7953 settings.go:142] acquiring lock: {Name:mk0059ab96b71199ca0a558b9bc695696bca2ea4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0509 08:25:44.005351    7953 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/kubeconfig
	I0509 08:25:44.005808    7953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/kubeconfig: {Name:mk1330d0f99a2286cbe8cc1ffbe430ce56d1dfc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0509 08:25:44.522538    7953 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "addons-20220509082454-6723" rescaled to 1
	I0509 08:25:44.522602    7953 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.24.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0509 08:25:44.525044    7953 out.go:177] * Verifying Kubernetes components...
	I0509 08:25:44.522636    7953 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0509 08:25:44.522659    7953 addons.go:415] enableAddons start: toEnable=map[], additional=[registry metrics-server volumesnapshots csi-hostpath-driver gcp-auth ingress ingress-dns helm-tiller]
	I0509 08:25:44.522815    7953 config.go:178] Loaded profile config "addons-20220509082454-6723": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.0
	I0509 08:25:44.526758    7953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0509 08:25:44.526807    7953 addons.go:65] Setting ingress=true in profile "addons-20220509082454-6723"
	I0509 08:25:44.526821    7953 addons.go:65] Setting ingress-dns=true in profile "addons-20220509082454-6723"
	I0509 08:25:44.526835    7953 addons.go:153] Setting addon ingress-dns=true in "addons-20220509082454-6723"
	I0509 08:25:44.526835    7953 addons.go:153] Setting addon ingress=true in "addons-20220509082454-6723"
	I0509 08:25:44.526852    7953 addons.go:65] Setting registry=true in profile "addons-20220509082454-6723"
	I0509 08:25:44.526874    7953 addons.go:153] Setting addon registry=true in "addons-20220509082454-6723"
	I0509 08:25:44.526904    7953 host.go:66] Checking if "addons-20220509082454-6723" exists ...
	I0509 08:25:44.526916    7953 addons.go:65] Setting csi-hostpath-driver=true in profile "addons-20220509082454-6723"
	I0509 08:25:44.526920    7953 host.go:66] Checking if "addons-20220509082454-6723" exists ...
	I0509 08:25:44.526933    7953 addons.go:65] Setting metrics-server=true in profile "addons-20220509082454-6723"
	I0509 08:25:44.526946    7953 addons.go:153] Setting addon csi-hostpath-driver=true in "addons-20220509082454-6723"
	I0509 08:25:44.526950    7953 addons.go:153] Setting addon metrics-server=true in "addons-20220509082454-6723"
	I0509 08:25:44.526970    7953 addons.go:65] Setting gcp-auth=true in profile "addons-20220509082454-6723"
	I0509 08:25:44.526986    7953 host.go:66] Checking if "addons-20220509082454-6723" exists ...
	I0509 08:25:44.526990    7953 mustload.go:65] Loading cluster: addons-20220509082454-6723
	I0509 08:25:44.526989    7953 addons.go:65] Setting storage-provisioner=true in profile "addons-20220509082454-6723"
	I0509 08:25:44.527016    7953 addons.go:153] Setting addon storage-provisioner=true in "addons-20220509082454-6723"
	W0509 08:25:44.527025    7953 addons.go:165] addon storage-provisioner should already be in state true
	I0509 08:25:44.526905    7953 host.go:66] Checking if "addons-20220509082454-6723" exists ...
	I0509 08:25:44.527041    7953 addons.go:65] Setting helm-tiller=true in profile "addons-20220509082454-6723"
	I0509 08:25:44.527064    7953 host.go:66] Checking if "addons-20220509082454-6723" exists ...
	I0509 08:25:44.527067    7953 addons.go:153] Setting addon helm-tiller=true in "addons-20220509082454-6723"
	I0509 08:25:44.527117    7953 host.go:66] Checking if "addons-20220509082454-6723" exists ...
	I0509 08:25:44.527151    7953 config.go:178] Loaded profile config "addons-20220509082454-6723": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.0
	I0509 08:25:44.527389    7953 cli_runner.go:164] Run: docker container inspect addons-20220509082454-6723 --format={{.State.Status}}
	I0509 08:25:44.527427    7953 cli_runner.go:164] Run: docker container inspect addons-20220509082454-6723 --format={{.State.Status}}
	I0509 08:25:44.527455    7953 cli_runner.go:164] Run: docker container inspect addons-20220509082454-6723 --format={{.State.Status}}
	I0509 08:25:44.527480    7953 cli_runner.go:164] Run: docker container inspect addons-20220509082454-6723 --format={{.State.Status}}
	I0509 08:25:44.527456    7953 cli_runner.go:164] Run: docker container inspect addons-20220509082454-6723 --format={{.State.Status}}
	I0509 08:25:44.527548    7953 cli_runner.go:164] Run: docker container inspect addons-20220509082454-6723 --format={{.State.Status}}
	I0509 08:25:44.526807    7953 addons.go:65] Setting volumesnapshots=true in profile "addons-20220509082454-6723"
	I0509 08:25:44.527597    7953 addons.go:153] Setting addon volumesnapshots=true in "addons-20220509082454-6723"
	I0509 08:25:44.527632    7953 host.go:66] Checking if "addons-20220509082454-6723" exists ...
	I0509 08:25:44.527686    7953 addons.go:65] Setting default-storageclass=true in profile "addons-20220509082454-6723"
	I0509 08:25:44.527731    7953 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-20220509082454-6723"
	I0509 08:25:44.527816    7953 cli_runner.go:164] Run: docker container inspect addons-20220509082454-6723 --format={{.State.Status}}
	I0509 08:25:44.528031    7953 cli_runner.go:164] Run: docker container inspect addons-20220509082454-6723 --format={{.State.Status}}
	I0509 08:25:44.528045    7953 cli_runner.go:164] Run: docker container inspect addons-20220509082454-6723 --format={{.State.Status}}
	I0509 08:25:44.528173    7953 host.go:66] Checking if "addons-20220509082454-6723" exists ...
	I0509 08:25:44.528638    7953 cli_runner.go:164] Run: docker container inspect addons-20220509082454-6723 --format={{.State.Status}}
	I0509 08:25:44.636693    7953 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0509 08:25:44.638483    7953 addons.go:348] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0509 08:25:44.638508    7953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0509 08:25:44.638583    7953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220509082454-6723
	I0509 08:25:44.640226    7953 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0509 08:25:44.641985    7953 addons.go:348] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0509 08:25:44.642009    7953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0509 08:25:44.642075    7953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220509082454-6723
	I0509 08:25:44.657142    7953 out.go:177]   - Using image k8s.gcr.io/sig-storage/snapshot-controller:v4.0.0
	I0509 08:25:44.659060    7953 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0509 08:25:44.662141    7953 out.go:177]   - Using image k8s.gcr.io/metrics-server/metrics-server:v0.6.1
	I0509 08:25:44.663996    7953 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0509 08:25:44.664015    7953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0509 08:25:44.664076    7953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220509082454-6723
	I0509 08:25:44.662735    7953 addons.go:153] Setting addon default-storageclass=true in "addons-20220509082454-6723"
	W0509 08:25:44.664228    7953 addons.go:165] addon default-storageclass should already be in state true
	I0509 08:25:44.664262    7953 host.go:66] Checking if "addons-20220509082454-6723" exists ...
	I0509 08:25:44.668167    7953 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0509 08:25:44.668190    7953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0509 08:25:44.668248    7953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220509082454-6723
	I0509 08:25:44.661934    7953 host.go:66] Checking if "addons-20220509082454-6723" exists ...
	I0509 08:25:44.659066    7953 addons.go:348] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0509 08:25:44.671053    7953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0509 08:25:44.671134    7953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220509082454-6723
	I0509 08:25:44.664899    7953 cli_runner.go:164] Run: docker container inspect addons-20220509082454-6723 --format={{.State.Status}}
	I0509 08:25:44.673237    7953 out.go:177]   - Using image registry:2.7.1
	I0509 08:25:44.675721    7953 out.go:177]   - Using image gcr.io/google_containers/kube-registry-proxy:0.4
	I0509 08:25:44.697408    7953 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-resizer:v1.1.0
	I0509 08:25:44.696928    7953 addons.go:348] installing /etc/kubernetes/addons/registry-rc.yaml
	I0509 08:25:44.704947    7953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (788 bytes)
	I0509 08:25:44.707276    7953 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-attacher:v3.1.0
	I0509 08:25:44.705008    7953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220509082454-6723
	I0509 08:25:44.716545    7953 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0
	I0509 08:25:44.716516    7953 out.go:177]   - Using image k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1
	I0509 08:25:44.718671    7953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49157 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/addons-20220509082454-6723/id_rsa Username:docker}
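
Each "docker container inspect -f ...HostPort..." call above resolves the host-side port that Docker published for the node container's 22/tcp, and the sshutil line shows the result: every SSH client dials 127.0.0.1 on the mapped port (49157 here) as user docker. A manual equivalent, assembled from values in the log:

    # Resolve the published SSH port for the node container and connect, as ssh_runner does.
    KEY=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/addons-20220509082454-6723/id_rsa
    PORT=$(docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-20220509082454-6723)
    ssh -i "$KEY" -p "$PORT" docker@127.0.0.1
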
	I0509 08:25:44.721673    7953 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0509 08:25:44.723562    7953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220509082454-6723
	I0509 08:25:44.726427    7953 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1
	I0509 08:25:44.726551    7953 out.go:177]   - Using image k8s.gcr.io/ingress-nginx/controller:v1.2.0
	I0509 08:25:44.729932    7953 out.go:177]   - Using image k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1
	I0509 08:25:44.729870    7953 out.go:177]   - Using image k8s.gcr.io/sig-storage/hostpathplugin:v1.6.0
	I0509 08:25:44.731777    7953 addons.go:348] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0509 08:25:44.733235    7953 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-controller:v0.2.0
	I0509 08:25:44.733253    7953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15567 bytes)
	I0509 08:25:44.735992    7953 out.go:177]   - Using image k8s.gcr.io/sig-storage/livenessprobe:v2.2.0
	I0509 08:25:44.736037    7953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220509082454-6723
	I0509 08:25:44.738834    7953 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0
	I0509 08:25:44.737139    7953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49157 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/addons-20220509082454-6723/id_rsa Username:docker}
	I0509 08:25:44.742206    7953 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-agent:v0.2.0
	I0509 08:25:44.740876    7953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49157 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/addons-20220509082454-6723/id_rsa Username:docker}
	I0509 08:25:44.743676    7953 addons.go:348] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0509 08:25:44.743697    7953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0509 08:25:44.743756    7953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220509082454-6723
	I0509 08:25:44.744020    7953 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0509 08:25:44.744039    7953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0509 08:25:44.744084    7953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220509082454-6723
	I0509 08:25:44.754590    7953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49157 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/addons-20220509082454-6723/id_rsa Username:docker}
	I0509 08:25:44.760515    7953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49157 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/addons-20220509082454-6723/id_rsa Username:docker}
	I0509 08:25:44.785929    7953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49157 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/addons-20220509082454-6723/id_rsa Username:docker}
	I0509 08:25:44.798589    7953 node_ready.go:35] waiting up to 6m0s for node "addons-20220509082454-6723" to be "Ready" ...
	I0509 08:25:44.799038    7953 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0509 08:25:44.800030    7953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49157 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/addons-20220509082454-6723/id_rsa Username:docker}
	I0509 08:25:44.803977    7953 node_ready.go:49] node "addons-20220509082454-6723" has status "Ready":"True"
	I0509 08:25:44.803999    7953 node_ready.go:38] duration metric: took 5.376444ms waiting for node "addons-20220509082454-6723" to be "Ready" ...
	I0509 08:25:44.804009    7953 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods, including those with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler], to be "Ready" ...
	I0509 08:25:44.804580    7953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49157 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/addons-20220509082454-6723/id_rsa Username:docker}
	I0509 08:25:44.813123    7953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49157 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/addons-20220509082454-6723/id_rsa Username:docker}
	I0509 08:25:44.819897    7953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49157 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/addons-20220509082454-6723/id_rsa Username:docker}
	I0509 08:25:44.865042    7953 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-pskmq" in "kube-system" namespace to be "Ready" ...
	I0509 08:25:45.091940    7953 addons.go:348] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0509 08:25:45.091972    7953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0509 08:25:45.264659    7953 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0509 08:25:45.272641    7953 addons.go:348] installing /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml
	I0509 08:25:45.272674    7953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml (2203 bytes)
	I0509 08:25:45.278915    7953 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0509 08:25:45.278946    7953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1931 bytes)
	I0509 08:25:45.361665    7953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0509 08:25:45.361978    7953 addons.go:348] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0509 08:25:45.362001    7953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0509 08:25:45.362324    7953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0509 08:25:45.363389    7953 addons.go:348] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0509 08:25:45.363412    7953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0509 08:25:45.366059    7953 addons.go:153] Setting addon gcp-auth=true in "addons-20220509082454-6723"
	I0509 08:25:45.366116    7953 host.go:66] Checking if "addons-20220509082454-6723" exists ...
	I0509 08:25:45.366632    7953 cli_runner.go:164] Run: docker container inspect addons-20220509082454-6723 --format={{.State.Status}}
	I0509 08:25:45.367208    7953 addons.go:348] installing /etc/kubernetes/addons/registry-svc.yaml
	I0509 08:25:45.367232    7953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0509 08:25:45.377336    7953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0509 08:25:45.378542    7953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0509 08:25:45.384379    7953 addons.go:348] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0509 08:25:45.384406    7953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3037 bytes)
	I0509 08:25:45.407165    7953 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0509 08:25:45.407220    7953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220509082454-6723
	I0509 08:25:45.439238    7953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49157 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/addons-20220509082454-6723/id_rsa Username:docker}
	I0509 08:25:45.468267    7953 addons.go:348] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0509 08:25:45.468300    7953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19584 bytes)
	I0509 08:25:45.469261    7953 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0509 08:25:45.469283    7953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0509 08:25:45.563608    7953 addons.go:348] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0509 08:25:45.563631    7953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (950 bytes)
	I0509 08:25:45.568599    7953 addons.go:348] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0509 08:25:45.568645    7953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0509 08:25:45.583156    7953 addons.go:348] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0509 08:25:45.583185    7953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (3666 bytes)
	I0509 08:25:45.666170    7953 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0509 08:25:45.666196    7953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0509 08:25:45.774401    7953 addons.go:348] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0509 08:25:45.774435    7953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3428 bytes)
	I0509 08:25:45.777982    7953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0509 08:25:45.782209    7953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0509 08:25:45.860762    7953 addons.go:348] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0509 08:25:45.860847    7953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2944 bytes)
	I0509 08:25:45.869901    7953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0509 08:25:45.884211    7953 addons.go:348] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0509 08:25:45.884238    7953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1071 bytes)
	I0509 08:25:45.962492    7953 addons.go:348] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0509 08:25:45.962519    7953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3194 bytes)
	I0509 08:25:46.060685    7953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0509 08:25:46.067980    7953 addons.go:348] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0509 08:25:46.068010    7953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2421 bytes)
	I0509 08:25:46.173073    7953 addons.go:348] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0509 08:25:46.173104    7953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1034 bytes)
	I0509 08:25:46.274280    7953 addons.go:348] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0509 08:25:46.274310    7953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (6710 bytes)
	I0509 08:25:46.365815    7953 addons.go:348] installing /etc/kubernetes/addons/csi-hostpath-provisioner.yaml
	I0509 08:25:46.365842    7953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-provisioner.yaml (2555 bytes)
	I0509 08:25:46.381678    7953 addons.go:348] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0509 08:25:46.381702    7953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2469 bytes)
	I0509 08:25:46.396865    7953 addons.go:348] installing /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml
	I0509 08:25:46.396886    7953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml (2555 bytes)
	I0509 08:25:46.411575    7953 addons.go:348] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0509 08:25:46.411597    7953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0509 08:25:46.464283    7953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0509 08:25:46.966755    7953 pod_ready.go:102] pod "coredns-6d4b75cb6d-pskmq" in "kube-system" namespace has status "Ready":"False"
	I0509 08:25:47.662463    7953 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.863389743s)
	I0509 08:25:47.662501    7953 start.go:783] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
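
The 2.86s command that just completed rewrites the coredns ConfigMap in place: sed inserts a hosts block immediately before the "forward . /etc/resolv.conf" line of the Corefile, and kubectl replace pushes the edited manifest back, which is what the "host record injected" line reports. Reconstructed from the sed expression, the inserted Corefile fragment (shown here ahead of the existing forward line) is:

    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf
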
	I0509 08:25:47.880063    7953 pod_ready.go:92] pod "coredns-6d4b75cb6d-pskmq" in "kube-system" namespace has status "Ready":"True"
	I0509 08:25:47.880146    7953 pod_ready.go:81] duration metric: took 3.015061159s waiting for pod "coredns-6d4b75cb6d-pskmq" in "kube-system" namespace to be "Ready" ...
	I0509 08:25:47.880172    7953 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-qcc2z" in "kube-system" namespace to be "Ready" ...
	I0509 08:25:47.881711    7953 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.520006129s)
	I0509 08:25:47.967799    7953 pod_ready.go:92] pod "coredns-6d4b75cb6d-qcc2z" in "kube-system" namespace has status "Ready":"True"
	I0509 08:25:47.967878    7953 pod_ready.go:81] duration metric: took 87.679395ms waiting for pod "coredns-6d4b75cb6d-qcc2z" in "kube-system" namespace to be "Ready" ...
	I0509 08:25:47.967905    7953 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-20220509082454-6723" in "kube-system" namespace to be "Ready" ...
	I0509 08:25:47.980463    7953 pod_ready.go:92] pod "etcd-addons-20220509082454-6723" in "kube-system" namespace has status "Ready":"True"
	I0509 08:25:47.980491    7953 pod_ready.go:81] duration metric: took 12.56877ms waiting for pod "etcd-addons-20220509082454-6723" in "kube-system" namespace to be "Ready" ...
	I0509 08:25:47.980517    7953 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-20220509082454-6723" in "kube-system" namespace to be "Ready" ...
	I0509 08:25:48.063826    7953 pod_ready.go:92] pod "kube-apiserver-addons-20220509082454-6723" in "kube-system" namespace has status "Ready":"True"
	I0509 08:25:48.063859    7953 pod_ready.go:81] duration metric: took 83.332412ms waiting for pod "kube-apiserver-addons-20220509082454-6723" in "kube-system" namespace to be "Ready" ...
	I0509 08:25:48.063873    7953 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-20220509082454-6723" in "kube-system" namespace to be "Ready" ...
	I0509 08:25:48.081212    7953 pod_ready.go:92] pod "kube-controller-manager-addons-20220509082454-6723" in "kube-system" namespace has status "Ready":"True"
	I0509 08:25:48.081307    7953 pod_ready.go:81] duration metric: took 17.422617ms waiting for pod "kube-controller-manager-addons-20220509082454-6723" in "kube-system" namespace to be "Ready" ...
	I0509 08:25:48.081351    7953 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8ch85" in "kube-system" namespace to be "Ready" ...
	I0509 08:25:48.283989    7953 pod_ready.go:92] pod "kube-proxy-8ch85" in "kube-system" namespace has status "Ready":"True"
	I0509 08:25:48.284015    7953 pod_ready.go:81] duration metric: took 202.644254ms waiting for pod "kube-proxy-8ch85" in "kube-system" namespace to be "Ready" ...
	I0509 08:25:48.284029    7953 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-20220509082454-6723" in "kube-system" namespace to be "Ready" ...
	I0509 08:25:48.683699    7953 pod_ready.go:92] pod "kube-scheduler-addons-20220509082454-6723" in "kube-system" namespace has status "Ready":"True"
	I0509 08:25:48.683733    7953 pod_ready.go:81] duration metric: took 399.695438ms waiting for pod "kube-scheduler-addons-20220509082454-6723" in "kube-system" namespace to be "Ready" ...
	I0509 08:25:48.683744    7953 pod_ready.go:38] duration metric: took 3.879721678s of extra waiting for all system-critical pods, including those with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler], to be "Ready" ...
	I0509 08:25:48.683770    7953 api_server.go:51] waiting for apiserver process to appear ...
	I0509 08:25:48.683812    7953 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0509 08:25:49.572904    7953 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.210536532s)
	I0509 08:25:49.572944    7953 addons.go:386] Verifying addon ingress=true in "addons-20220509082454-6723"
	I0509 08:25:49.572949    7953 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.195562318s)
	I0509 08:25:49.573039    7953 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.194473257s)
	I0509 08:25:49.573106    7953 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (3.795033273s)
	I0509 08:25:49.575061    7953 out.go:177] * Verifying ingress addon...
	I0509 08:25:49.573237    7953 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.790977557s)
	I0509 08:25:49.573279    7953 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (4.166087519s)
	I0509 08:25:49.573326    7953 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (3.703396945s)
	I0509 08:25:49.573436    7953 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.512712148s)
	W0509 08:25:49.577341    7953 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0509 08:25:49.577371    7953 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
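
The failure above is a CRD-establishment race rather than a broken manifest: the VolumeSnapshotClass object is applied in the same kubectl batch as the CRD that defines its kind, so the apply can reach the object before the new API is registered ("ensure CRDs are installed first"). minikube simply retries the whole batch 276ms later (see 08:25:49.854163 below); a sketch of the explicit alternative, assuming the same binary and manifest paths from the log, would wait for the CRD first:

    # Two-phase apply that avoids the race (binary and manifest paths from the log).
    K="sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.0/kubectl"
    $K apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    $K wait --for=condition=established --timeout=60s crd/volumesnapshotclasses.snapshot.storage.k8s.io
    $K apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
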
	I0509 08:25:49.577412    7953 addons.go:386] Verifying addon metrics-server=true in "addons-20220509082454-6723"
	I0509 08:25:49.577465    7953 addons.go:386] Verifying addon registry=true in "addons-20220509082454-6723"
	I0509 08:25:49.580351    7953 out.go:177]   - Using image k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.0
	I0509 08:25:49.578207    7953 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0509 08:25:49.582165    7953 out.go:177] * Verifying registry addon...
	I0509 08:25:49.585228    7953 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0509 08:25:49.587076    7953 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.0.8
	I0509 08:25:49.586209    7953 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0509 08:25:49.588763    7953 addons.go:348] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0509 08:25:49.587158    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:25:49.588801    7953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0509 08:25:49.590740    7953 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0509 08:25:49.590768    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0509 08:25:49.686473    7953 addons.go:348] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0509 08:25:49.686502    7953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0509 08:25:49.783020    7953 addons.go:348] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0509 08:25:49.783047    7953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (4842 bytes)
	I0509 08:25:49.854163    7953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0509 08:25:49.967115    7953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0509 08:25:50.176316    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:25:50.179168    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0509 08:25:50.669535    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:25:50.670796    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0509 08:25:50.676590    7953 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.992749673s)
	I0509 08:25:50.676652    7953 api_server.go:71] duration metric: took 6.154024095s to wait for apiserver process to appear ...
	I0509 08:25:50.676664    7953 api_server.go:87] waiting for apiserver healthz status ...
	I0509 08:25:50.676678    7953 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0509 08:25:50.677039    7953 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.212707354s)
	I0509 08:25:50.677075    7953 addons.go:386] Verifying addon csi-hostpath-driver=true in "addons-20220509082454-6723"
	I0509 08:25:50.680275    7953 out.go:177] * Verifying csi-hostpath-driver addon...
	I0509 08:25:50.682925    7953 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0509 08:25:50.688372    7953 api_server.go:266] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0509 08:25:50.761438    7953 api_server.go:140] control plane version: v1.24.0
	I0509 08:25:50.761481    7953 api_server.go:130] duration metric: took 84.799245ms to wait for apiserver health ...
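
The healthz probe above is an ordinary HTTPS GET against the apiserver, and /healthz is typically readable without credentials under the default RBAC rules, so a manual check needs only the endpoint from the log (-k because the serving cert chains to minikubeCA rather than a system CA):

    # Manual equivalent of the healthz check; prints "ok" when the apiserver is healthy.
    curl -k https://192.168.49.2:8443/healthz
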
	I0509 08:25:50.761493    7953 system_pods.go:43] waiting for kube-system pods to appear ...
	I0509 08:25:50.763279    7953 kapi.go:86] Found 5 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0509 08:25:50.763309    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:25:50.772200    7953 system_pods.go:59] 20 kube-system pods found
	I0509 08:25:50.772238    7953 system_pods.go:61] "coredns-6d4b75cb6d-pskmq" [a672e2ee-2d2f-4a73-918f-6ac4dc004553] Running
	I0509 08:25:50.772243    7953 system_pods.go:61] "coredns-6d4b75cb6d-qcc2z" [f0867568-f3fb-47c4-b170-7b0799b33e1b] Running
	I0509 08:25:50.772251    7953 system_pods.go:61] "csi-hostpath-attacher-0" [59811fcf-64cc-4090-a191-6b614b2af974] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) didn't match pod affinity rules. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0509 08:25:50.772257    7953 system_pods.go:61] "csi-hostpath-provisioner-0" [7f263878-835f-4f1d-b653-ffd8313013f4] Pending
	I0509 08:25:50.772261    7953 system_pods.go:61] "csi-hostpath-resizer-0" [16fef44e-9db8-4c34-b13f-6b8379ac994f] Pending
	I0509 08:25:50.772265    7953 system_pods.go:61] "csi-hostpath-snapshotter-0" [1102d29b-b164-4fac-bc85-c6c2f01cec06] Pending
	I0509 08:25:50.772270    7953 system_pods.go:61] "csi-hostpathplugin-0" [2409cf91-e015-4768-9da8-499654716767] Pending
	I0509 08:25:50.772277    7953 system_pods.go:61] "etcd-addons-20220509082454-6723" [d22609af-4a35-46b2-bc30-cd8e96ece69a] Running
	I0509 08:25:50.772283    7953 system_pods.go:61] "kube-apiserver-addons-20220509082454-6723" [eca1da2a-fc78-4c09-b3b7-8d03e87dab77] Running
	I0509 08:25:50.772298    7953 system_pods.go:61] "kube-controller-manager-addons-20220509082454-6723" [7027dd6f-f76d-43e1-88dd-2d02c6717189] Running
	I0509 08:25:50.772306    7953 system_pods.go:61] "kube-ingress-dns-minikube" [1b6cc3b0-fb8f-45bc-bada-1c24b3f8a196] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0509 08:25:50.772317    7953 system_pods.go:61] "kube-proxy-8ch85" [16590b4c-b95c-417a-a6ef-9cd8a394f903] Running
	I0509 08:25:50.772328    7953 system_pods.go:61] "kube-scheduler-addons-20220509082454-6723" [20ad36c6-1ca1-42cd-a13d-f864724520b9] Running
	I0509 08:25:50.772346    7953 system_pods.go:61] "metrics-server-5ccbd9cf46-9zwvx" [4e835545-32d0-4e32-abd8-7358d6da7010] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0509 08:25:50.772359    7953 system_pods.go:61] "registry-kqqf6" [32385d42-0e88-434c-9d19-dd305fc24814] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0509 08:25:50.772373    7953 system_pods.go:61] "registry-proxy-bg6kd" [ce2e10d6-c702-46ae-9765-101df2d2ef68] Pending
	I0509 08:25:50.772381    7953 system_pods.go:61] "snapshot-controller-557749dccd-pp4pp" [f73e5506-b698-4743-96a1-07749a6622c6] Pending
	I0509 08:25:50.772394    7953 system_pods.go:61] "snapshot-controller-557749dccd-zs6dp" [87d6ef26-dbec-4a75-a8dd-0617b84e2b81] Pending
	I0509 08:25:50.772407    7953 system_pods.go:61] "storage-provisioner" [88e48a1a-2f4b-443f-b620-36dbd12fa287] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0509 08:25:50.772420    7953 system_pods.go:61] "tiller-deploy-c7d76457b-4zmrh" [f2654535-302c-4b23-ae19-29761a0720e7] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0509 08:25:50.772433    7953 system_pods.go:74] duration metric: took 10.933159ms to wait for pod list to return data ...
	I0509 08:25:50.772447    7953 default_sa.go:34] waiting for default service account to be created ...
	I0509 08:25:50.775570    7953 default_sa.go:45] found service account: "default"
	I0509 08:25:50.775597    7953 default_sa.go:55] duration metric: took 3.140182ms for default service account to be created ...
	I0509 08:25:50.775605    7953 system_pods.go:116] waiting for k8s-apps to be running ...
	I0509 08:25:50.784647    7953 system_pods.go:86] 20 kube-system pods found
	I0509 08:25:50.784684    7953 system_pods.go:89] "coredns-6d4b75cb6d-pskmq" [a672e2ee-2d2f-4a73-918f-6ac4dc004553] Running
	I0509 08:25:50.784693    7953 system_pods.go:89] "coredns-6d4b75cb6d-qcc2z" [f0867568-f3fb-47c4-b170-7b0799b33e1b] Running
	I0509 08:25:50.784702    7953 system_pods.go:89] "csi-hostpath-attacher-0" [59811fcf-64cc-4090-a191-6b614b2af974] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) didn't match pod affinity rules. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0509 08:25:50.784710    7953 system_pods.go:89] "csi-hostpath-provisioner-0" [7f263878-835f-4f1d-b653-ffd8313013f4] Pending
	I0509 08:25:50.784730    7953 system_pods.go:89] "csi-hostpath-resizer-0" [16fef44e-9db8-4c34-b13f-6b8379ac994f] Pending
	I0509 08:25:50.784748    7953 system_pods.go:89] "csi-hostpath-snapshotter-0" [1102d29b-b164-4fac-bc85-c6c2f01cec06] Pending
	I0509 08:25:50.784754    7953 system_pods.go:89] "csi-hostpathplugin-0" [2409cf91-e015-4768-9da8-499654716767] Pending
	I0509 08:25:50.784764    7953 system_pods.go:89] "etcd-addons-20220509082454-6723" [d22609af-4a35-46b2-bc30-cd8e96ece69a] Running
	I0509 08:25:50.784772    7953 system_pods.go:89] "kube-apiserver-addons-20220509082454-6723" [eca1da2a-fc78-4c09-b3b7-8d03e87dab77] Running
	I0509 08:25:50.784784    7953 system_pods.go:89] "kube-controller-manager-addons-20220509082454-6723" [7027dd6f-f76d-43e1-88dd-2d02c6717189] Running
	I0509 08:25:50.784793    7953 system_pods.go:89] "kube-ingress-dns-minikube" [1b6cc3b0-fb8f-45bc-bada-1c24b3f8a196] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0509 08:25:50.784804    7953 system_pods.go:89] "kube-proxy-8ch85" [16590b4c-b95c-417a-a6ef-9cd8a394f903] Running
	I0509 08:25:50.784811    7953 system_pods.go:89] "kube-scheduler-addons-20220509082454-6723" [20ad36c6-1ca1-42cd-a13d-f864724520b9] Running
	I0509 08:25:50.784823    7953 system_pods.go:89] "metrics-server-5ccbd9cf46-9zwvx" [4e835545-32d0-4e32-abd8-7358d6da7010] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0509 08:25:50.784832    7953 system_pods.go:89] "registry-kqqf6" [32385d42-0e88-434c-9d19-dd305fc24814] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0509 08:25:50.784842    7953 system_pods.go:89] "registry-proxy-bg6kd" [ce2e10d6-c702-46ae-9765-101df2d2ef68] Pending
	I0509 08:25:50.784848    7953 system_pods.go:89] "snapshot-controller-557749dccd-pp4pp" [f73e5506-b698-4743-96a1-07749a6622c6] Pending
	I0509 08:25:50.784858    7953 system_pods.go:89] "snapshot-controller-557749dccd-zs6dp" [87d6ef26-dbec-4a75-a8dd-0617b84e2b81] Pending
	I0509 08:25:50.784872    7953 system_pods.go:89] "storage-provisioner" [88e48a1a-2f4b-443f-b620-36dbd12fa287] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0509 08:25:50.784892    7953 system_pods.go:89] "tiller-deploy-c7d76457b-4zmrh" [f2654535-302c-4b23-ae19-29761a0720e7] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0509 08:25:50.784901    7953 system_pods.go:126] duration metric: took 9.290621ms to wait for k8s-apps to be running ...
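The system_pods check logged above amounts to listing kube-system pods and tallying their phases. A minimal client-go sketch of that idea (an illustration, not minikube's actual system_pods.go; it assumes a reachable kubeconfig at the default path):

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Load ~/.kube/config (assumption: a reachable cluster is configured there).
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	running := 0
    	for _, p := range pods.Items {
    		if p.Status.Phase == corev1.PodRunning {
    			running++
    		}
    	}
    	// Mirrors the "20 kube-system pods found" Running-vs-Pending tally above.
    	fmt.Printf("%d/%d kube-system pods running\n", running, len(pods.Items))
    }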
	I0509 08:25:50.784910    7953 system_svc.go:44] waiting for kubelet service to be running ....
	I0509 08:25:50.784957    7953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
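The kubelet service wait is a systemd liveness probe run over SSH: `systemctl is-active --quiet` prints nothing and signals the answer through its exit status, which is why the log records only the command and, later, its duration. A local sketch of the same check:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// --quiet suppresses stdout; the exit code alone carries the answer
    	// (0 only while the unit is active).
    	if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
    		fmt.Println("kubelet is not active:", err)
    		return
    	}
    	fmt.Println("kubelet is active")
    }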
	I0509 08:25:51.163576    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:25:51.164522    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0509 08:25:51.272411    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:25:51.663121    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:25:51.664041    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0509 08:25:51.772841    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:25:52.166692    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:25:52.167615    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0509 08:25:52.270639    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:25:52.665897    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:25:52.666667    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0509 08:25:52.773918    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:25:53.178436    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0509 08:25:53.178501    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:25:53.269310    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:25:53.667093    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0509 08:25:53.667406    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:25:53.772476    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:25:54.167310    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:25:54.169009    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0509 08:25:54.273450    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:25:54.662598    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:25:54.664651    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0509 08:25:54.771040    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:25:54.964677    7953 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.110454114s)
	I0509 08:25:54.964797    7953 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (4.997594852s)
	I0509 08:25:54.965093    7953 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (4.180117862s)
	I0509 08:25:54.965113    7953 system_svc.go:56] duration metric: took 4.180201161s WaitForService to wait for kubelet.
	I0509 08:25:54.965123    7953 kubeadm.go:548] duration metric: took 10.44249441s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0509 08:25:54.965156    7953 node_conditions.go:102] verifying NodePressure condition ...
	I0509 08:25:54.966788    7953 addons.go:386] Verifying addon gcp-auth=true in "addons-20220509082454-6723"
	I0509 08:25:54.969187    7953 out.go:177] * Verifying gcp-auth addon...
	I0509 08:25:54.967675    7953 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0509 08:25:54.971024    7953 node_conditions.go:123] node cpu capacity is 8
	I0509 08:25:54.971057    7953 node_conditions.go:105] duration metric: took 5.895308ms to run NodePressure ...
	I0509 08:25:54.971079    7953 start.go:213] waiting for startup goroutines ...
	I0509 08:25:54.972062    7953 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0509 08:25:54.974618    7953 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0509 08:25:54.974644    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:25:55.093407    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:25:55.095489    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0509 08:25:55.270257    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:25:55.478404    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:25:55.593304    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:25:55.595925    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0509 08:25:55.769802    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:25:55.980159    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:25:56.163815    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:25:56.164551    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0509 08:25:56.270091    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:25:56.478952    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:25:56.594191    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:25:56.663094    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0509 08:25:56.769684    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:25:56.978620    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:25:57.093511    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:25:57.095857    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0509 08:25:57.269205    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:25:57.478193    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:25:57.592724    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:25:57.597599    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0509 08:25:57.770299    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:25:57.978761    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:25:58.093768    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:25:58.094850    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0509 08:25:58.269682    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:25:58.478731    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:25:58.593633    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:25:58.594914    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0509 08:25:58.769322    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:25:58.981095    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:25:59.093230    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:25:59.094873    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0509 08:25:59.269459    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:25:59.478729    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:25:59.593891    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:25:59.595931    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0509 08:25:59.770205    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:00.050048    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:26:00.186282    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0509 08:26:00.186518    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:26:00.269766    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:00.478077    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:26:00.592446    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:26:00.595131    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0509 08:26:00.770112    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:00.978641    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:26:01.093151    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:26:01.095865    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0509 08:26:01.268693    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:01.479411    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:26:01.593469    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:26:01.594969    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0509 08:26:01.769842    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:01.980403    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:26:02.093162    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:26:02.097544    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0509 08:26:02.270162    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:02.478809    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:26:02.593430    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:26:02.595557    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0509 08:26:02.770098    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:02.978107    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:26:03.269928    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0509 08:26:03.270101    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:26:03.270713    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:03.478316    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:26:03.593187    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:26:03.595645    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0509 08:26:03.770393    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:03.979282    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:26:04.093316    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:26:04.095331    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0509 08:26:04.270265    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:04.478901    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:26:04.593509    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:26:04.595561    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0509 08:26:04.769121    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:04.978405    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:26:05.094008    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:26:05.095757    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0509 08:26:05.269535    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:05.478475    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:26:05.593593    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:26:05.599803    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0509 08:26:05.770489    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:05.977813    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:26:06.093927    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:26:06.164517    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0509 08:26:06.271468    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:06.478552    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:26:06.593611    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:26:06.597293    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0509 08:26:06.769441    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:06.978127    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:26:07.093217    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:26:07.095679    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0509 08:26:07.269195    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:07.478009    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:26:07.593636    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:26:07.595587    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0509 08:26:07.772652    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:07.978700    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:26:08.093302    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:26:08.095614    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0509 08:26:08.270284    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:08.478550    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:26:08.593986    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:26:08.594899    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0509 08:26:08.769126    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:08.978562    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:26:09.094805    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:26:09.097032    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0509 08:26:09.270165    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:09.478299    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:26:09.593364    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:26:09.595714    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0509 08:26:09.770032    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:09.978656    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:26:10.093268    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:26:10.095314    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0509 08:26:10.269377    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:10.478592    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:26:10.593632    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:26:10.595779    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0509 08:26:10.769422    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:10.977946    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:26:11.165871    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:26:11.166021    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0509 08:26:11.269916    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:11.478524    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:26:11.593415    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:26:11.595547    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0509 08:26:11.770566    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:12.056368    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:26:12.093928    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:26:12.095751    7953 kapi.go:108] duration metric: took 22.510517909s to wait for kubernetes.io/minikube-addons=registry ...
	I0509 08:26:12.270101    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:12.478980    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:26:12.593895    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:26:12.768551    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:12.979274    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:26:13.094057    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:26:13.271399    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:13.478469    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:26:13.593207    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:26:13.770087    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:13.977624    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:26:14.092895    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:26:14.270319    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:14.478306    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:26:14.594793    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:26:14.770489    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:14.978164    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:26:15.093325    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:26:15.270584    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:15.478018    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:26:15.593909    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:26:15.770275    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:15.978503    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:26:16.118458    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:26:16.269452    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:16.477960    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:26:16.593036    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:26:16.769898    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:16.978479    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:26:17.095438    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:26:17.321347    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:17.478021    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:26:17.593410    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:26:17.769359    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:17.979020    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:26:18.094032    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:26:18.270094    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:18.478452    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:26:18.593528    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:26:18.769098    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:18.978281    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:26:19.094003    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:26:19.271688    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:19.479067    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:26:19.641560    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:26:19.769626    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:19.978942    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:26:20.093746    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:26:20.269515    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:20.478398    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:26:20.593394    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:26:20.769938    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:21.018078    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:26:21.095159    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:26:21.269837    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:21.478754    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:26:21.594223    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:26:21.768969    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:21.978782    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:26:22.093998    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:26:22.270490    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:22.478542    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:26:22.593018    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:26:22.770541    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:22.978557    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:26:23.094345    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:26:23.269932    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:23.478876    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:26:23.593904    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:26:23.770363    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:23.978035    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:26:24.093948    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:26:24.269603    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:24.478469    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:26:24.763473    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:26:24.768983    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:24.978792    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:26:25.093672    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:26:25.271031    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:25.478738    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:26:25.593813    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:26:25.770003    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:25.978216    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:26:26.094559    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:26:26.271295    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:26.479160    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:26:26.594111    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:26:26.769540    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:26.978752    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:26:27.094208    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:26:27.269356    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:27.478954    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:26:27.594035    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:26:27.770069    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:27.978633    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:26:28.093639    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:26:28.270184    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:28.478754    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:26:28.593487    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:26:28.769776    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:28.978500    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:26:29.093372    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:26:29.269738    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:29.478720    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:26:29.593816    7953 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0509 08:26:29.769695    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:29.978387    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:26:30.093530    7953 kapi.go:108] duration metric: took 40.515318863s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0509 08:26:30.269181    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:30.478419    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:26:30.769855    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:30.979172    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:26:31.269929    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:31.479008    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:26:31.770458    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:31.978594    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0509 08:26:32.269954    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:32.479150    7953 kapi.go:108] duration metric: took 37.507081238s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0509 08:26:32.481296    7953 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-20220509082454-6723 cluster.
	I0509 08:26:32.483274    7953 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0509 08:26:32.484823    7953 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
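(Concretely, opting a pod out means adding something like gcp-auth-skip-secret: "true" under metadata.labels in its spec; the key comes from the message above, while the "true" value is an assumption, since the message names only the key.)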
	I0509 08:26:32.785471    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:33.270735    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:33.768475    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:34.268495    7953 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0509 08:26:34.769169    7953 kapi.go:108] duration metric: took 44.086240045s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0509 08:26:34.772057    7953 out.go:177] * Enabled addons: default-storageclass, ingress-dns, storage-provisioner, helm-tiller, metrics-server, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0509 08:26:34.774014    7953 addons.go:417] enableAddons completed in 50.251349938s
	I0509 08:26:34.812989    7953 start.go:499] kubectl: 1.24.0, cluster: 1.24.0 (minor skew: 0)
	I0509 08:26:34.815321    7953 out.go:177] * Done! kubectl is now configured to use "addons-20220509082454-6723" cluster and "default" namespace by default
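The kapi.go:96 lines that dominate this log are a poll-until-Running loop over a label selector. A hand-rolled sketch of that pattern with client-go (assumed for illustration, not minikube's actual kapi.go):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func waitForLabel(client kubernetes.Interface, ns, selector string, timeout time.Duration) error {
    	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
    		pods, err := client.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
    		if err != nil || len(pods.Items) == 0 {
    			return false, nil // keep polling through transient errors
    		}
    		phase := pods.Items[0].Status.Phase
    		// This is the line shape repeated throughout the log above.
    		fmt.Printf("waiting for pod %q, current state: %s\n", selector, phase)
    		return phase == corev1.PodRunning, nil
    	})
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)
    	if err := waitForLabel(client, "gcp-auth", "kubernetes.io/minikube-addons=gcp-auth", 5*time.Minute); err != nil {
    		panic(err)
    	}
    }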
	
	* 
	* ==> Docker <==
	* -- Logs begin at Mon 2022-05-09 08:25:11 UTC, end at Mon 2022-05-09 08:31:45 UTC. --
	May 09 08:27:00 addons-20220509082454-6723 dockerd[465]: time="2022-05-09T08:27:00.780377326Z" level=info msg="Container failed to exit within 1s of signal 15 - using the force" container=19f6c457b9b0115837c8d9be4fa0b6d57a729b0de79f62fa9e3aa97c7898fd9b
	May 09 08:27:00 addons-20220509082454-6723 dockerd[465]: time="2022-05-09T08:27:00.870487825Z" level=info msg="ignoring event" container=19f6c457b9b0115837c8d9be4fa0b6d57a729b0de79f62fa9e3aa97c7898fd9b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 09 08:27:00 addons-20220509082454-6723 dockerd[465]: time="2022-05-09T08:27:00.930012097Z" level=info msg="ignoring event" container=f0c955958fc4c8f631892e1164c1a20f80785dddc7daab7390804fc3c1184089 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 09 08:27:09 addons-20220509082454-6723 dockerd[465]: time="2022-05-09T08:27:09.751608689Z" level=info msg="ignoring event" container=b49681b6cf0a205f0dd7359eb0cc02f341adb38d834ca7f4e2742faae4572e9b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 09 08:27:09 addons-20220509082454-6723 dockerd[465]: time="2022-05-09T08:27:09.809863866Z" level=info msg="ignoring event" container=853d84fa4791643812f9dcbed1647177aaa387dd1b40bf335fdb09a542db2c1b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 09 08:27:11 addons-20220509082454-6723 dockerd[465]: time="2022-05-09T08:27:11.176018180Z" level=info msg="ignoring event" container=0ac26640659d780ab449bd2334568d3d906c2550cf473b22fb80b3645f205b18 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 09 08:27:11 addons-20220509082454-6723 dockerd[465]: time="2022-05-09T08:27:11.476842735Z" level=info msg="ignoring event" container=3af14f8207794dbff7e8cfe7e711744c9e2862946784a186bdac4b8d18c8b1e1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 09 08:27:11 addons-20220509082454-6723 dockerd[465]: time="2022-05-09T08:27:11.580193294Z" level=info msg="ignoring event" container=4079e4f65e803972ec38241aec79b6c138739dfe48058622f449c37364bfb3e0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 09 08:27:11 addons-20220509082454-6723 dockerd[465]: time="2022-05-09T08:27:11.580242340Z" level=info msg="ignoring event" container=6f6681fad9a68625012129439036998011f3a1660cacd0af216af25ac9b2c1ea module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 09 08:27:11 addons-20220509082454-6723 dockerd[465]: time="2022-05-09T08:27:11.580469580Z" level=info msg="ignoring event" container=e461d693a1a2dad86f27c412f917f82eca31381068ddc5699147e2154cbace1a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 09 08:27:11 addons-20220509082454-6723 dockerd[465]: time="2022-05-09T08:27:11.582073890Z" level=info msg="ignoring event" container=f6fc5a8ddb17f664612411f0dbf861851b5c67166c54b77f07012a3a3cb252ab module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 09 08:27:11 addons-20220509082454-6723 dockerd[465]: time="2022-05-09T08:27:11.662049217Z" level=info msg="ignoring event" container=f9afb7a04d25926ed1f12dd6ba2997f3fa8680ded3eb479b150e434139cf51d9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 09 08:27:11 addons-20220509082454-6723 dockerd[465]: time="2022-05-09T08:27:11.671781417Z" level=info msg="ignoring event" container=b1e7170bd3e0597503359727edeaa3453ee1128bb3d0f75d6d3346f641dc2d90 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 09 08:27:11 addons-20220509082454-6723 dockerd[465]: time="2022-05-09T08:27:11.675322820Z" level=info msg="ignoring event" container=7aed26f4aff5d2d3210f3b9f92ea05e39d4e096861c3ca09f32d8adbe76d9037 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 09 08:27:11 addons-20220509082454-6723 dockerd[465]: time="2022-05-09T08:27:11.762748547Z" level=info msg="ignoring event" container=10fef6303f90b02f5e7150e8b950fa95aafae2ffc753fd5211f2effd324bbd79 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 09 08:27:11 addons-20220509082454-6723 dockerd[465]: time="2022-05-09T08:27:11.877521535Z" level=info msg="ignoring event" container=14338d0819f824ff82f04dde0793f397f3c53d840513d472c8d63ac3c4a89fae module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 09 08:27:11 addons-20220509082454-6723 dockerd[465]: time="2022-05-09T08:27:11.886094289Z" level=info msg="ignoring event" container=20fd2d5d29eee9e080660ac8e3f36b3d1f357bfedc100d40c65eec3d0e98b2e8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 09 08:27:11 addons-20220509082454-6723 dockerd[465]: time="2022-05-09T08:27:11.893568687Z" level=info msg="ignoring event" container=6661a7fa2376ad4137834066f4afed449f5a79e5f709b3481748a8232d032b4e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 09 08:27:11 addons-20220509082454-6723 dockerd[465]: time="2022-05-09T08:27:11.967200181Z" level=info msg="ignoring event" container=4604afab3e3d2c818076a0d7c79fb0252937c4be3bfdd54ec76ea662727531e0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 09 08:27:17 addons-20220509082454-6723 dockerd[465]: time="2022-05-09T08:27:17.983022664Z" level=info msg="ignoring event" container=170684d7549141af5651c1f1ecee5851229d082fdddced44fcef2ac66c364a7c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 09 08:27:17 addons-20220509082454-6723 dockerd[465]: time="2022-05-09T08:27:17.987612373Z" level=info msg="ignoring event" container=006a5b36e9365a0cf1e6488860297077168d91babf0a362e835b876f0ee94f8f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 09 08:27:18 addons-20220509082454-6723 dockerd[465]: time="2022-05-09T08:27:18.106043791Z" level=info msg="ignoring event" container=70a23ef23094d05bdde93fdd12fd0f712bb559411b239a4d76892b82d1876229 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 09 08:27:18 addons-20220509082454-6723 dockerd[465]: time="2022-05-09T08:27:18.110624372Z" level=info msg="ignoring event" container=7052a4ccc49826cf4aebbd7e98831658a1b5426172673fd089d99ffaa87e00ef module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 09 08:31:44 addons-20220509082454-6723 dockerd[465]: time="2022-05-09T08:31:44.776915488Z" level=info msg="ignoring event" container=33cc200a076b4d8de3454a783afa49504e228728635746147ee971e9ae8cff08 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 09 08:31:44 addons-20220509082454-6723 dockerd[465]: time="2022-05-09T08:31:44.880321944Z" level=info msg="ignoring event" container=aa8b1cfd395b3cffa918ed9570e9b7e045863c09f6299252a4679161a99fc499 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
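The 08:27:00 "failed to exit within 1s of signal 15 - using the force" entry is Docker's normal stop escalation: SIGTERM, a grace period, then SIGKILL. Running something like `docker stop --time 1 <container>` reproduces a 1-second grace period; the 1s value here presumably comes from whatever tore the addon containers down, not from the daemon's default.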
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                          CREATED             STATE               NAME                      ATTEMPT             POD ID
	f861911212cf0       gcr.io/google-samples/hello-app@sha256:88b205d7995332e10e836514fbfd59ecaf8976fc15060cd66e85cdcebe7fb356        4 minutes ago       Running             hello-world-app           0                   e4d0b44f3fdb7
	7e1eae536132c       nginx@sha256:5a0df7fb7c8c03e4158ae9974bfbd6a15da2bdfdeded4fb694367ec812325d31                                  4 minutes ago       Running             nginx                     0                   da17c3c1c0837
	386693e96fe63       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:26c7b2454f1c946d7c80839251d939606620f37c2f275be2796c1ffd96c438f6   5 minutes ago       Running             gcp-auth                  0                   2338676026fb0
	df61303e3702a       6e38f40d628db                                                                                                  5 minutes ago       Running             storage-provisioner       0                   1afb9d73160b8
	8f8557d20e22e       a4ca41631cc7a                                                                                                  6 minutes ago       Running             coredns                   0                   5c98e7302a0f6
	a742f842d6bf5       77b49675beae1                                                                                                  6 minutes ago       Running             kube-proxy                0                   389cb08f0e2c5
	1529212668f56       529072250ccc6                                                                                                  6 minutes ago       Running             kube-apiserver            0                   dbf0d69422ac4
	f36f49cc6c9ba       88784fb4ac2f6                                                                                                  6 minutes ago       Running             kube-controller-manager   0                   ef67ca56c2920
	3aa8ad5c52e2a       aebe758cef4cd                                                                                                  6 minutes ago       Running             etcd                      0                   b860b3a9f3573
	318787268d544       e3ed7dee73e93                                                                                                  6 minutes ago       Running             kube-scheduler            0                   512838da0ec04
	
	* 
	* ==> coredns [8f8557d20e22] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               addons-20220509082454-6723
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-20220509082454-6723
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3bac68e23e7013f03af5baca398608c8c8001fab
	                    minikube.k8s.io/name=addons-20220509082454-6723
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_05_09T08_25_30_0700
	                    minikube.k8s.io/version=v1.25.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-20220509082454-6723
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 May 2022 08:25:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-20220509082454-6723
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 09 May 2022 08:31:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 09 May 2022 08:27:13 +0000   Mon, 09 May 2022 08:25:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 09 May 2022 08:27:13 +0000   Mon, 09 May 2022 08:25:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 09 May 2022 08:27:13 +0000   Mon, 09 May 2022 08:25:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 09 May 2022 08:27:13 +0000   Mon, 09 May 2022 08:25:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-20220509082454-6723
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873828Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873828Ki
	  pods:               110
	System Info:
	  Machine ID:                 140a143b31184b58be947b52a01fff83
	  System UUID:                5a67426b-3cc4-4ada-b8e4-3fe90a7fc207
	  Boot ID:                    3b893569-87e6-48ef-ba82-8bed0b9b0671
	  Kernel Version:             5.13.0-1024-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.13
	  Kubelet Version:            v1.24.0
	  Kube-Proxy Version:         v1.24.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                  ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-794ff86bf-p5l4p                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m48s
	  default                     nginx                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  gcp-auth                    gcp-auth-7865585679-zlvx9                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 coredns-6d4b75cb6d-qcc2z                              100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     6m3s
	  kube-system                 etcd-addons-20220509082454-6723                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         6m15s
	  kube-system                 kube-apiserver-addons-20220509082454-6723             250m (3%)     0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 kube-controller-manager-addons-20220509082454-6723    200m (2%)     0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 kube-proxy-8ch85                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 kube-scheduler-addons-20220509082454-6723             100m (1%)     0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 storage-provisioner                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   0 (0%)
	  memory             170Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m1s                   kube-proxy       
	  Normal  Starting                 6m26s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m26s (x3 over 6m26s)  kubelet          Node addons-20220509082454-6723 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m26s (x3 over 6m26s)  kubelet          Node addons-20220509082454-6723 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m26s (x3 over 6m26s)  kubelet          Node addons-20220509082454-6723 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m16s                  kubelet          Node addons-20220509082454-6723 status is now: NodeHasSufficientMemory
	  Normal  Starting                 6m16s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    6m16s                  kubelet          Node addons-20220509082454-6723 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m16s                  kubelet          Node addons-20220509082454-6723 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             6m15s                  kubelet          Node addons-20220509082454-6723 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  6m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m5s                   kubelet          Node addons-20220509082454-6723 status is now: NodeReady
	  Normal  RegisteredNode           6m4s                   node-controller  Node addons-20220509082454-6723 event: Registered Node addons-20220509082454-6723 in Controller
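
	The node description above shows a Ready control-plane node with only 750m CPU and 170Mi memory requested, so the MetricsServer failure recorded in this test is unlikely to be node-level resource pressure. To reproduce this view against the same profile (illustrative invocation, not part of the harness output; context and node names taken from the logs above):

	    # inspect the node the addon tests ran on
	    kubectl --context addons-20220509082454-6723 describe node addons-20220509082454-6723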
	
	* 
	* ==> dmesg <==
	* [May 9 08:17]  #2
	[  +0.001790]  #3
	[  +0.002285]  #4
	[  +0.001345] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.002170] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.002206]  #5
	[  +0.002136]  #6
	[  +0.001398]  #7
	[  +0.053456] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.541684] i8042: Warning: Keylock active
	[  +0.011880] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.001120] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000843] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000947] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000823] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000918] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000916] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000913] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000944] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +9.060268] kauditd_printk_skb: 32 callbacks suppressed
	
	* 
	* ==> etcd [3aa8ad5c52e2] <==
	* {"level":"info","ts":"2022-05-09T08:25:25.390Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2022-05-09T08:25:25.390Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2022-05-09T08:25:25.390Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2022-05-09T08:25:25.390Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2022-05-09T08:25:25.390Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-05-09T08:25:25.390Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2022-05-09T08:25:25.390Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-05-09T08:25:25.390Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-20220509082454-6723 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-05-09T08:25:25.390Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-09T08:25:25.390Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-09T08:25:25.390Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-09T08:25:25.390Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-05-09T08:25:25.390Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-05-09T08:25:25.391Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-09T08:25:25.391Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-09T08:25:25.391Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-09T08:25:25.392Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2022-05-09T08:25:25.392Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-05-09T08:26:00.183Z","caller":"traceutil/trace.go:171","msg":"trace[553055693] transaction","detail":"{read_only:false; response_revision:758; number_of_response:1; }","duration":"121.118832ms","start":"2022-05-09T08:26:00.062Z","end":"2022-05-09T08:26:00.183Z","steps":["trace[553055693] 'process raft request'  (duration: 90.545209ms)","trace[553055693] 'compare'  (duration: 30.454148ms)"],"step_count":2}
	{"level":"warn","ts":"2022-05-09T08:26:03.266Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"175.496928ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:12801"}
	{"level":"info","ts":"2022-05-09T08:26:03.266Z","caller":"traceutil/trace.go:171","msg":"trace[330179958] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:774; }","duration":"175.625199ms","start":"2022-05-09T08:26:03.091Z","end":"2022-05-09T08:26:03.266Z","steps":["trace[330179958] 'range keys from in-memory index tree'  (duration: 175.345985ms)"],"step_count":1}
	{"level":"warn","ts":"2022-05-09T08:26:03.267Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"174.784982ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:19 size:85313"}
	{"level":"info","ts":"2022-05-09T08:26:03.267Z","caller":"traceutil/trace.go:171","msg":"trace[1647984900] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:19; response_revision:774; }","duration":"174.838921ms","start":"2022-05-09T08:26:03.092Z","end":"2022-05-09T08:26:03.267Z","steps":["trace[1647984900] 'range keys from in-memory index tree'  (duration: 174.471631ms)"],"step_count":1}
	{"level":"warn","ts":"2022-05-09T08:26:24.760Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"169.777492ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13395"}
	{"level":"info","ts":"2022-05-09T08:26:24.760Z","caller":"traceutil/trace.go:171","msg":"trace[1063404224] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:907; }","duration":"169.899031ms","start":"2022-05-09T08:26:24.590Z","end":"2022-05-09T08:26:24.760Z","steps":["trace[1063404224] 'agreement among raft nodes before linearized reading'  (duration: 77.709508ms)","trace[1063404224] 'range keys from in-memory index tree'  (duration: 91.995741ms)"],"step_count":2}
	
	* 
	* ==> kernel <==
	*  08:31:46 up 14 min,  0 users,  load average: 0.07, 0.49, 0.32
	Linux addons-20220509082454-6723 5.13.0-1024-gcp #29~20.04.1-Ubuntu SMP Thu Apr 14 23:15:00 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [1529212668f5] <==
	* W0509 08:27:18.868408       1 cacher.go:150] Terminating all watchers from cacher *unstructured.Unstructured
	W0509 08:27:18.875312       1 cacher.go:150] Terminating all watchers from cacher *unstructured.Unstructured
	E0509 08:28:28.947259       1 controller.go:113] loading OpenAPI spec for "" failed with: APIService  does not exist for update
	I0509 08:28:28.947290       1 controller.go:126] OpenAPI AggregationController: action for item : Rate Limited Requeue.
	W0509 08:28:49.075939       1 handler_proxy.go:102] no RequestInfo found in the context
	E0509 08:28:49.075982       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0509 08:28:49.075989       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0509 08:28:49.077104       1 handler_proxy.go:102] no RequestInfo found in the context
	E0509 08:28:49.077183       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0509 08:28:49.077224       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0509 08:30:28.995629       1 handler_proxy.go:102] no RequestInfo found in the context
	W0509 08:30:28.995659       1 handler_proxy.go:102] no RequestInfo found in the context
	E0509 08:30:28.995674       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0509 08:30:28.995682       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0509 08:30:28.995696       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0509 08:30:28.996802       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0509 08:31:28.996171       1 handler_proxy.go:102] no RequestInfo found in the context
	E0509 08:31:28.996217       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0509 08:31:28.996227       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0509 08:31:28.997298       1 handler_proxy.go:102] no RequestInfo found in the context
	E0509 08:31:28.997359       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0509 08:31:28.997378       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
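
	The repeating 503s for v1beta1.metrics.k8s.io above are the direct symptom behind the TestAddons/parallel/MetricsServer failure: the aggregated APIService is registered, but the metrics-server backing it never answers. A minimal check (illustrative; assumes the addon's default k8s-app=metrics-server label):

	    # is the APIService marked Available, and is its backing pod Ready?
	    kubectl --context addons-20220509082454-6723 get apiservice v1beta1.metrics.k8s.io
	    kubectl --context addons-20220509082454-6723 -n kube-system get pods -l k8s-app=metrics-server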
	
	* 
	* ==> kube-controller-manager [f36f49cc6c9b] <==
	* W0509 08:29:43.781887       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0509 08:30:13.372181       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0509 08:30:13.795115       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	W0509 08:30:17.767978       1 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0509 08:30:17.768011       1 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0509 08:30:21.688035       1 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0509 08:30:21.688064       1 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0509 08:30:25.466621       1 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0509 08:30:25.466649       1 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0509 08:30:29.244282       1 namespace_controller.go:162] deletion of namespace ingress-nginx failed: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0509 08:30:43.384389       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0509 08:30:43.809566       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0509 08:30:47.937077       1 namespace_controller.go:162] deletion of namespace ingress-nginx failed: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0509 08:30:57.711951       1 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0509 08:30:57.711980       1 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0509 08:30:59.461720       1 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0509 08:30:59.461752       1 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0509 08:31:13.393006       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0509 08:31:13.823039       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	W0509 08:31:21.466568       1 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0509 08:31:21.466600       1 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0509 08:31:31.044077       1 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0509 08:31:31.044106       1 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0509 08:31:43.404158       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0509 08:31:43.836862       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [a742f842d6bf] <==
	* I0509 08:25:44.367372       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0509 08:25:44.367449       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0509 08:25:44.367482       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0509 08:25:44.390715       1 server_others.go:206] "Using iptables Proxier"
	I0509 08:25:44.390752       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0509 08:25:44.390762       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0509 08:25:44.390780       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0509 08:25:44.390804       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0509 08:25:44.391037       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0509 08:25:44.391262       1 server.go:661] "Version info" version="v1.24.0"
	I0509 08:25:44.391286       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0509 08:25:44.391760       1 config.go:317] "Starting service config controller"
	I0509 08:25:44.392082       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0509 08:25:44.392125       1 config.go:226] "Starting endpoint slice config controller"
	I0509 08:25:44.392132       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0509 08:25:44.392302       1 config.go:444] "Starting node config controller"
	I0509 08:25:44.392317       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0509 08:25:44.492699       1 shared_informer.go:262] Caches are synced for node config
	I0509 08:25:44.492729       1 shared_informer.go:262] Caches are synced for service config
	I0509 08:25:44.492732       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [318787268d54] <==
	* W0509 08:25:28.078952       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0509 08:25:28.078986       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0509 08:25:28.078987       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0509 08:25:28.079013       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0509 08:25:28.079048       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0509 08:25:28.079081       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0509 08:25:28.079177       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0509 08:25:28.079208       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0509 08:25:28.079246       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0509 08:25:28.079287       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0509 08:25:28.079341       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0509 08:25:28.079356       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0509 08:25:28.997258       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0509 08:25:28.997288       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0509 08:25:29.004565       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0509 08:25:29.004625       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0509 08:25:29.021525       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0509 08:25:29.021555       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0509 08:25:29.090400       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0509 08:25:29.090435       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0509 08:25:29.093308       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0509 08:25:29.093340       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0509 08:25:29.227969       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0509 08:25:29.228000       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0509 08:25:29.673933       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
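
	The burst of "forbidden" list/watch errors above is the usual startup race: the scheduler comes up before RBAC bootstrapping finishes, and the errors stop once its informers sync (last line). If they persisted past startup, the binding to verify (illustrative command) would be:

	    kubectl --context addons-20220509082454-6723 get clusterrolebinding system:kube-scheduler -o wide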
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2022-05-09 08:25:11 UTC, end at Mon 2022-05-09 08:31:46 UTC. --
	May 09 08:27:18 addons-20220509082454-6723 kubelet[1927]: I0509 08:27:18.777737    1927 scope.go:110] "RemoveContainer" containerID="006a5b36e9365a0cf1e6488860297077168d91babf0a362e835b876f0ee94f8f"
	May 09 08:27:18 addons-20220509082454-6723 kubelet[1927]: I0509 08:27:18.794107    1927 scope.go:110] "RemoveContainer" containerID="006a5b36e9365a0cf1e6488860297077168d91babf0a362e835b876f0ee94f8f"
	May 09 08:27:18 addons-20220509082454-6723 kubelet[1927]: E0509 08:27:18.795035    1927 remote_runtime.go:578] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error: No such container: 006a5b36e9365a0cf1e6488860297077168d91babf0a362e835b876f0ee94f8f" containerID="006a5b36e9365a0cf1e6488860297077168d91babf0a362e835b876f0ee94f8f"
	May 09 08:27:18 addons-20220509082454-6723 kubelet[1927]: I0509 08:27:18.795093    1927 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:docker ID:006a5b36e9365a0cf1e6488860297077168d91babf0a362e835b876f0ee94f8f} err="failed to get container status \"006a5b36e9365a0cf1e6488860297077168d91babf0a362e835b876f0ee94f8f\": rpc error: code = Unknown desc = Error: No such container: 006a5b36e9365a0cf1e6488860297077168d91babf0a362e835b876f0ee94f8f"
	May 09 08:27:19 addons-20220509082454-6723 kubelet[1927]: I0509 08:27:19.095774    1927 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=87d6ef26-dbec-4a75-a8dd-0617b84e2b81 path="/var/lib/kubelet/pods/87d6ef26-dbec-4a75-a8dd-0617b84e2b81/volumes"
	May 09 08:27:19 addons-20220509082454-6723 kubelet[1927]: I0509 08:27:19.096221    1927 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=f73e5506-b698-4743-96a1-07749a6622c6 path="/var/lib/kubelet/pods/f73e5506-b698-4743-96a1-07749a6622c6/volumes"
	May 09 08:27:31 addons-20220509082454-6723 kubelet[1927]: I0509 08:27:31.086867    1927 scope.go:110] "RemoveContainer" containerID="68c6765acb1f69c79a32e1512ac51487577580b39bf34f0af7e54b109a15dd04"
	May 09 08:27:31 addons-20220509082454-6723 kubelet[1927]: I0509 08:27:31.099861    1927 scope.go:110] "RemoveContainer" containerID="4ef7c321d13c129e7e4b8760f4820f074a468a2071939531cf9753d6648d755b"
	May 09 08:27:31 addons-20220509082454-6723 kubelet[1927]: I0509 08:27:31.113185    1927 scope.go:110] "RemoveContainer" containerID="fd2dec8d2805ba7af46f7f1fe9a10246e72eda341eb17092753cbbe0f4107e30"
	May 09 08:27:31 addons-20220509082454-6723 kubelet[1927]: I0509 08:27:31.126135    1927 scope.go:110] "RemoveContainer" containerID="f24f5a39b853bba85358d78af848510d2f8f41be7e3374b2866de758939b98db"
	May 09 08:27:31 addons-20220509082454-6723 kubelet[1927]: I0509 08:27:31.139861    1927 scope.go:110] "RemoveContainer" containerID="7fe0d152461abd32a219b8915d18b660f90fdb4e7699a169338f2ad8aee2fde8"
	May 09 08:27:31 addons-20220509082454-6723 kubelet[1927]: I0509 08:27:31.154050    1927 scope.go:110] "RemoveContainer" containerID="2ff600bdb24f517db23519bd6d4c94372f024a7645d74eee77f2fe3bf444a513"
	May 09 08:27:34 addons-20220509082454-6723 kubelet[1927]: E0509 08:27:34.310900    1927 cri_stats_provider.go:669] "Unable to fetch container log stats" err="open /var/log/pods/kube-system_snapshot-controller-557749dccd-pp4pp_f73e5506-b698-4743-96a1-07749a6622c6/volume-snapshot-controller: no such file or directory" containerName="volume-snapshot-controller"
	May 09 08:27:34 addons-20220509082454-6723 kubelet[1927]: E0509 08:27:34.310955    1927 cri_stats_provider.go:669] "Unable to fetch container log stats" err="open /var/log/pods/kube-system_snapshot-controller-557749dccd-zs6dp_87d6ef26-dbec-4a75-a8dd-0617b84e2b81/volume-snapshot-controller: no such file or directory" containerName="volume-snapshot-controller"
	May 09 08:31:45 addons-20220509082454-6723 kubelet[1927]: I0509 08:31:45.162153    1927 reconciler.go:192] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnct4\" (UniqueName: \"kubernetes.io/projected/4e835545-32d0-4e32-abd8-7358d6da7010-kube-api-access-bnct4\") pod \"4e835545-32d0-4e32-abd8-7358d6da7010\" (UID: \"4e835545-32d0-4e32-abd8-7358d6da7010\") "
	May 09 08:31:45 addons-20220509082454-6723 kubelet[1927]: I0509 08:31:45.162226    1927 reconciler.go:192] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4e835545-32d0-4e32-abd8-7358d6da7010-tmp-dir\") pod \"4e835545-32d0-4e32-abd8-7358d6da7010\" (UID: \"4e835545-32d0-4e32-abd8-7358d6da7010\") "
	May 09 08:31:45 addons-20220509082454-6723 kubelet[1927]: W0509 08:31:45.162438    1927 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/4e835545-32d0-4e32-abd8-7358d6da7010/volumes/kubernetes.io~empty-dir/tmp-dir: clearQuota called, but quotas disabled
	May 09 08:31:45 addons-20220509082454-6723 kubelet[1927]: I0509 08:31:45.162553    1927 operation_generator.go:856] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e835545-32d0-4e32-abd8-7358d6da7010-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "4e835545-32d0-4e32-abd8-7358d6da7010" (UID: "4e835545-32d0-4e32-abd8-7358d6da7010"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	May 09 08:31:45 addons-20220509082454-6723 kubelet[1927]: I0509 08:31:45.164436    1927 operation_generator.go:856] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e835545-32d0-4e32-abd8-7358d6da7010-kube-api-access-bnct4" (OuterVolumeSpecName: "kube-api-access-bnct4") pod "4e835545-32d0-4e32-abd8-7358d6da7010" (UID: "4e835545-32d0-4e32-abd8-7358d6da7010"). InnerVolumeSpecName "kube-api-access-bnct4". PluginName "kubernetes.io/projected", VolumeGidValue ""
	May 09 08:31:45 addons-20220509082454-6723 kubelet[1927]: I0509 08:31:45.263431    1927 reconciler.go:312] "Volume detached for volume \"kube-api-access-bnct4\" (UniqueName: \"kubernetes.io/projected/4e835545-32d0-4e32-abd8-7358d6da7010-kube-api-access-bnct4\") on node \"addons-20220509082454-6723\" DevicePath \"\""
	May 09 08:31:45 addons-20220509082454-6723 kubelet[1927]: I0509 08:31:45.263472    1927 reconciler.go:312] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4e835545-32d0-4e32-abd8-7358d6da7010-tmp-dir\") on node \"addons-20220509082454-6723\" DevicePath \"\""
	May 09 08:31:45 addons-20220509082454-6723 kubelet[1927]: I0509 08:31:45.781058    1927 scope.go:110] "RemoveContainer" containerID="33cc200a076b4d8de3454a783afa49504e228728635746147ee971e9ae8cff08"
	May 09 08:31:45 addons-20220509082454-6723 kubelet[1927]: I0509 08:31:45.798979    1927 scope.go:110] "RemoveContainer" containerID="33cc200a076b4d8de3454a783afa49504e228728635746147ee971e9ae8cff08"
	May 09 08:31:45 addons-20220509082454-6723 kubelet[1927]: E0509 08:31:45.800144    1927 remote_runtime.go:578] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error: No such container: 33cc200a076b4d8de3454a783afa49504e228728635746147ee971e9ae8cff08" containerID="33cc200a076b4d8de3454a783afa49504e228728635746147ee971e9ae8cff08"
	May 09 08:31:45 addons-20220509082454-6723 kubelet[1927]: I0509 08:31:45.800190    1927 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:docker ID:33cc200a076b4d8de3454a783afa49504e228728635746147ee971e9ae8cff08} err="failed to get container status \"33cc200a076b4d8de3454a783afa49504e228728635746147ee971e9ae8cff08\": rpc error: code = Unknown desc = Error: No such container: 33cc200a076b4d8de3454a783afa49504e228728635746147ee971e9ae8cff08"
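
	The paired RemoveContainer/ContainerStatus errors above are a benign race: the kubelet retries a delete after Docker has already pruned the container, so the follow-up status lookup returns "No such container". Confirming the container is simply gone (illustrative; uses the container ID prefix from the log):

	    out/minikube-linux-amd64 ssh -p addons-20220509082454-6723 "docker ps -a --filter id=33cc200a076b"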
	
	* 
	* ==> storage-provisioner [df61303e3702] <==
	* I0509 08:25:50.765855       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0509 08:25:50.775377       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0509 08:25:50.775456       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0509 08:25:50.790388       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0509 08:25:50.790570       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-20220509082454-6723_49abdbcf-c01c-4901-b0c5-8aa25fd89b16!
	I0509 08:25:50.791710       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8c96e7e1-d24c-4205-9d77-efe6633a4ff8", APIVersion:"v1", ResourceVersion:"669", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-20220509082454-6723_49abdbcf-c01c-4901-b0c5-8aa25fd89b16 became leader
	I0509 08:25:50.960384       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-20220509082454-6723_49abdbcf-c01c-4901-b0c5-8aa25fd89b16!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-20220509082454-6723 -n addons-20220509082454-6723
helpers_test.go:261: (dbg) Run:  kubectl --context addons-20220509082454-6723 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: 
helpers_test.go:272: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context addons-20220509082454-6723 describe pod 
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context addons-20220509082454-6723 describe pod : exit status 1 (60.56935ms)

** stderr ** 
	error: resource name may not be empty

** /stderr **
helpers_test.go:277: kubectl --context addons-20220509082454-6723 describe pod : exit status 1
--- FAIL: TestAddons/parallel/MetricsServer (312.23s)

x
+
TestForceSystemdFlag (73.35s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-20220509085506-6723 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p force-systemd-flag-20220509085506-6723 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: exit status 90 (1m10.123346206s)

-- stdout --
	* [force-systemd-flag-20220509085506-6723] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14070
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* Using Docker driver with the root privilege
	* Starting control plane node force-systemd-flag-20220509085506-6723 in cluster force-systemd-flag-20220509085506-6723
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
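
The start exits with status 90 right after container creation, and the stderr below records the most likely trigger visible in this excerpt: minikube's preflight decided the host cgroup does not allow setting a memory limit, so the requested --memory=2048 cannot be enforced. Cross-checking the host's own view (illustrative commands; cgroup v1 layout assumed, matching the 5.13 GCP kernel shown above):

    # does Docker report memory-limit support, and is the v1 memory controller enabled?
    docker info --format '{{.MemoryLimit}}'
    grep memory /proc/cgroups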
** stderr ** 
	I0509 08:55:06.457658  186275 out.go:296] Setting OutFile to fd 1 ...
	I0509 08:55:06.457828  186275 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0509 08:55:06.457839  186275 out.go:309] Setting ErrFile to fd 2...
	I0509 08:55:06.457847  186275 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0509 08:55:06.457986  186275 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/bin
	I0509 08:55:06.458278  186275 out.go:303] Setting JSON to false
	I0509 08:55:06.460850  186275 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":2260,"bootTime":1652084246,"procs":984,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1024-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0509 08:55:06.460958  186275 start.go:125] virtualization: kvm guest
	I0509 08:55:06.463767  186275 out.go:177] * [force-systemd-flag-20220509085506-6723] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0509 08:55:06.465996  186275 out.go:177]   - MINIKUBE_LOCATION=14070
	I0509 08:55:06.466013  186275 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4.checksum
	I0509 08:55:06.465954  186275 notify.go:193] Checking for updates...
	I0509 08:55:06.469235  186275 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0509 08:55:06.471315  186275 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/kubeconfig
	I0509 08:55:06.473020  186275 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube
	I0509 08:55:06.474646  186275 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0509 08:55:06.476818  186275 config.go:178] Loaded profile config "kubernetes-upgrade-20220509085441-6723": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0509 08:55:06.476959  186275 config.go:178] Loaded profile config "missing-upgrade-20220509085336-6723": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0509 08:55:06.477059  186275 config.go:178] Loaded profile config "running-upgrade-20220509085501-6723": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0509 08:55:06.477125  186275 driver.go:346] Setting default libvirt URI to qemu:///system
	I0509 08:55:06.522409  186275 docker.go:137] docker version: linux-20.10.15
	I0509 08:55:06.522521  186275 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0509 08:55:06.641477  186275 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:60 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2022-05-09 08:55:06.553916854 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1024-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0509 08:55:06.641613  186275 docker.go:254] overlay module found
	I0509 08:55:06.643893  186275 out.go:177] * Using the docker driver based on user configuration
	I0509 08:55:06.645295  186275 start.go:284] selected driver: docker
	I0509 08:55:06.645316  186275 start.go:801] validating driver "docker" against <nil>
	I0509 08:55:06.645343  186275 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0509 08:55:06.645417  186275 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0509 08:55:06.645441  186275 out.go:239] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0509 08:55:06.646864  186275 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0509 08:55:06.648701  186275 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0509 08:55:06.777855  186275 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:60 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:38 OomKillDisable:true NGoroutines:42 SystemTime:2022-05-09 08:55:06.679882341 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1024-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0509 08:55:06.777994  186275 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0509 08:55:06.778168  186275 start_flags.go:829] Wait components to verify : map[apiserver:true system_pods:true]
	I0509 08:55:06.780470  186275 out.go:177] * Using Docker driver with the root privilege
	I0509 08:55:06.781841  186275 cni.go:95] Creating CNI manager for ""
	I0509 08:55:06.781878  186275 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0509 08:55:06.781891  186275 start_flags.go:306] config:
	{Name:force-systemd-flag-20220509085506-6723 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.0 ClusterName:force-systemd-flag-20220509085506-6723 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0509 08:55:06.783957  186275 out.go:177] * Starting control plane node force-systemd-flag-20220509085506-6723 in cluster force-systemd-flag-20220509085506-6723
	I0509 08:55:06.785393  186275 cache.go:120] Beginning downloading kic base image for docker with docker
	I0509 08:55:06.786731  186275 out.go:177] * Pulling base image ...
	I0509 08:55:06.788079  186275 preload.go:132] Checking if preload exists for k8s version v1.24.0 and runtime docker
	I0509 08:55:06.788130  186275 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.0-docker-overlay2-amd64.tar.lz4
	I0509 08:55:06.788150  186275 cache.go:57] Caching tarball of preloaded images
	I0509 08:55:06.788128  186275 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0509 08:55:06.788426  186275 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0509 08:55:06.788448  186275 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.0 on docker
	I0509 08:55:06.788574  186275 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/force-systemd-flag-20220509085506-6723/config.json ...
	I0509 08:55:06.788599  186275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/force-systemd-flag-20220509085506-6723/config.json: {Name:mk9aed9988c0e5019e3f23652e4ea57b3c44414a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0509 08:55:06.838503  186275 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0509 08:55:06.838537  186275 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0509 08:55:06.838555  186275 cache.go:206] Successfully downloaded all kic artifacts
	I0509 08:55:06.838592  186275 start.go:352] acquiring machines lock for force-systemd-flag-20220509085506-6723: {Name:mk0f86bbd9d5220c32089d0083184e884fb202b1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0509 08:55:06.838725  186275 start.go:356] acquired machines lock for "force-systemd-flag-20220509085506-6723" in 110.48µs
	I0509 08:55:06.838763  186275 start.go:91] Provisioning new machine with config: &{Name:force-systemd-flag-20220509085506-6723 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.0 ClusterName:force-systemd-flag-20220509085506-6723 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.24.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0509 08:55:06.838862  186275 start.go:131] createHost starting for "" (driver="docker")
	I0509 08:55:06.841380  186275 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0509 08:55:06.841739  186275 start.go:165] libmachine.API.Create for "force-systemd-flag-20220509085506-6723" (driver="docker")
	I0509 08:55:06.841790  186275 client.go:168] LocalClient.Create starting
	I0509 08:55:06.841864  186275 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/ca.pem
	I0509 08:55:06.841904  186275 main.go:134] libmachine: Decoding PEM data...
	I0509 08:55:06.841931  186275 main.go:134] libmachine: Parsing certificate...
	I0509 08:55:06.842022  186275 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/cert.pem
	I0509 08:55:06.842052  186275 main.go:134] libmachine: Decoding PEM data...
	I0509 08:55:06.842079  186275 main.go:134] libmachine: Parsing certificate...
	I0509 08:55:06.842513  186275 cli_runner.go:164] Run: docker network inspect force-systemd-flag-20220509085506-6723 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0509 08:55:06.882692  186275 cli_runner.go:211] docker network inspect force-systemd-flag-20220509085506-6723 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0509 08:55:06.882767  186275 network_create.go:272] running [docker network inspect force-systemd-flag-20220509085506-6723] to gather additional debugging logs...
	I0509 08:55:06.882791  186275 cli_runner.go:164] Run: docker network inspect force-systemd-flag-20220509085506-6723
	W0509 08:55:06.924457  186275 cli_runner.go:211] docker network inspect force-systemd-flag-20220509085506-6723 returned with exit code 1
	I0509 08:55:06.924512  186275 network_create.go:275] error running [docker network inspect force-systemd-flag-20220509085506-6723]: docker network inspect force-systemd-flag-20220509085506-6723: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: force-systemd-flag-20220509085506-6723
	I0509 08:55:06.924545  186275 network_create.go:277] output of [docker network inspect force-systemd-flag-20220509085506-6723]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: force-systemd-flag-20220509085506-6723
	
	** /stderr **
	I0509 08:55:06.924711  186275 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0509 08:55:06.965591  186275 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000010238] misses:0}
	I0509 08:55:06.965686  186275 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0509 08:55:06.965720  186275 network_create.go:115] attempt to create docker network force-systemd-flag-20220509085506-6723 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0509 08:55:06.965819  186275 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20220509085506-6723
	I0509 08:55:07.118711  186275 network_create.go:99] docker network force-systemd-flag-20220509085506-6723 192.168.49.0/24 created
	I0509 08:55:07.118756  186275 kic.go:106] calculated static IP "192.168.49.2" for the "force-systemd-flag-20220509085506-6723" container
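A note on how the two addresses above relate: minikube reserves the first free private /24 (192.168.49.0/24 here) and derives the gateway (.1) and the node's static IP (.2) from it, which is why the Gateway/ClientMin values in the subnet reservation line up with the calculated container IP. A minimal sketch of that derivation, illustrative only and not minikube's actual network.go code:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// Parse the reserved subnet and derive gateway (.1) and first client (.2),
		// matching the Gateway/ClientMin values logged above.
		_, subnet, err := net.ParseCIDR("192.168.49.0/24")
		if err != nil {
			panic(err)
		}
		base := subnet.IP.To4()
		gateway := net.IPv4(base[0], base[1], base[2], base[3]+1)
		firstClient := net.IPv4(base[0], base[1], base[2], base[3]+2)
		fmt.Println(gateway, firstClient) // 192.168.49.1 192.168.49.2
	}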
	I0509 08:55:07.118830  186275 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0509 08:55:07.166615  186275 cli_runner.go:164] Run: docker volume create force-systemd-flag-20220509085506-6723 --label name.minikube.sigs.k8s.io=force-systemd-flag-20220509085506-6723 --label created_by.minikube.sigs.k8s.io=true
	I0509 08:55:07.212474  186275 oci.go:103] Successfully created a docker volume force-systemd-flag-20220509085506-6723
	I0509 08:55:07.212567  186275 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-20220509085506-6723-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-20220509085506-6723 --entrypoint /usr/bin/test -v force-systemd-flag-20220509085506-6723:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -d /var/lib
	I0509 08:55:08.116349  186275 oci.go:107] Successfully prepared a docker volume force-systemd-flag-20220509085506-6723
	I0509 08:55:08.116399  186275 preload.go:132] Checking if preload exists for k8s version v1.24.0 and runtime docker
	I0509 08:55:08.116436  186275 kic.go:179] Starting extracting preloaded images to volume ...
	I0509 08:55:08.116507  186275 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-20220509085506-6723:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I0509 08:55:13.330878  186275 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-20220509085506-6723:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir: (5.214300297s)
	I0509 08:55:13.330942  186275 kic.go:188] duration metric: took 5.214503 seconds to extract preloaded images to volume
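The two timings above ("Completed: ... (5.214300297s)" and the "duration metric") come from the CLI runner wrapping the extraction command and logging its elapsed time on completion. A sketch of such a timed wrapper, with a hypothetical tarball path and volume name standing in for the real ones:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// runTimed mirrors the runner's behavior: run the command, then log how
	// long it took. The real cli_runner also captures output; omitted here.
	func runTimed(name string, args ...string) error {
		start := time.Now()
		err := exec.Command(name, args...).Run()
		fmt.Printf("Completed: %s: (%s)\n", name, time.Since(start))
		return err
	}

	func main() {
		// Placeholder host path and volume name (assumptions, not from this log).
		_ = runTimed("docker", "run", "--rm", "--entrypoint", "/usr/bin/tar",
			"-v", "/path/to/preloaded-images.tar.lz4:/preloaded.tar:ro",
			"-v", "some-volume:/extractDir",
			"gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815",
			"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	}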
	W0509 08:55:13.330991  186275 oci.go:136] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0509 08:55:13.331007  186275 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0509 08:55:13.331064  186275 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0509 08:55:13.476744  186275 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-20220509085506-6723 --name force-systemd-flag-20220509085506-6723 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-20220509085506-6723 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-20220509085506-6723 --network force-systemd-flag-20220509085506-6723 --ip 192.168.49.2 --volume force-systemd-flag-20220509085506-6723:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5
	I0509 08:55:14.086928  186275 cli_runner.go:164] Run: docker container inspect force-systemd-flag-20220509085506-6723 --format={{.State.Running}}
	I0509 08:55:14.141734  186275 cli_runner.go:164] Run: docker container inspect force-systemd-flag-20220509085506-6723 --format={{.State.Status}}
	I0509 08:55:14.224812  186275 cli_runner.go:164] Run: docker exec force-systemd-flag-20220509085506-6723 stat /var/lib/dpkg/alternatives/iptables
	I0509 08:55:14.321744  186275 oci.go:279] the created container "force-systemd-flag-20220509085506-6723" has a running status.
	I0509 08:55:14.321796  186275 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/force-systemd-flag-20220509085506-6723/id_rsa...
	I0509 08:55:14.568900  186275 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/force-systemd-flag-20220509085506-6723/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0509 08:55:14.568974  186275 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/force-systemd-flag-20220509085506-6723/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0509 08:55:14.713449  186275 cli_runner.go:164] Run: docker container inspect force-systemd-flag-20220509085506-6723 --format={{.State.Status}}
	I0509 08:55:14.764761  186275 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0509 08:55:14.764789  186275 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-20220509085506-6723 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0509 08:55:14.885682  186275 cli_runner.go:164] Run: docker container inspect force-systemd-flag-20220509085506-6723 --format={{.State.Status}}
	I0509 08:55:14.935495  186275 machine.go:88] provisioning docker machine ...
	I0509 08:55:14.935552  186275 ubuntu.go:169] provisioning hostname "force-systemd-flag-20220509085506-6723"
	I0509 08:55:14.935616  186275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220509085506-6723
	I0509 08:55:14.980400  186275 main.go:134] libmachine: Using SSH client type: native
	I0509 08:55:14.980731  186275 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d8f60] 0x7dbfc0 <nil>  [] 0s} 127.0.0.1 49329 <nil> <nil>}
	I0509 08:55:14.980766  186275 main.go:134] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-20220509085506-6723 && echo "force-systemd-flag-20220509085506-6723" | sudo tee /etc/hostname
	I0509 08:55:15.119785  186275 main.go:134] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-20220509085506-6723
	
	I0509 08:55:15.119867  186275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220509085506-6723
	I0509 08:55:15.159924  186275 main.go:134] libmachine: Using SSH client type: native
	I0509 08:55:15.160116  186275 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d8f60] 0x7dbfc0 <nil>  [] 0s} 127.0.0.1 49329 <nil> <nil>}
	I0509 08:55:15.160146  186275 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-20220509085506-6723' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-20220509085506-6723/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-20220509085506-6723' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0509 08:55:15.300956  186275 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0509 08:55:15.300991  186275 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube}
	I0509 08:55:15.301018  186275 ubuntu.go:177] setting up certificates
	I0509 08:55:15.301029  186275 provision.go:83] configureAuth start
	I0509 08:55:15.301072  186275 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-20220509085506-6723
	I0509 08:55:15.352977  186275 provision.go:138] copyHostCerts
	I0509 08:55:15.353020  186275 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/ca.pem
	I0509 08:55:15.353043  186275 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/ca.pem, removing ...
	I0509 08:55:15.353055  186275 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/ca.pem
	I0509 08:55:15.353125  186275 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/ca.pem (1078 bytes)
	I0509 08:55:15.353186  186275 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cert.pem
	I0509 08:55:15.353208  186275 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cert.pem, removing ...
	I0509 08:55:15.353220  186275 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cert.pem
	I0509 08:55:15.353257  186275 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cert.pem (1123 bytes)
	I0509 08:55:15.353299  186275 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/key.pem
	I0509 08:55:15.353314  186275 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/key.pem, removing ...
	I0509 08:55:15.353321  186275 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/key.pem
	I0509 08:55:15.353340  186275 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/key.pem (1679 bytes)
	I0509 08:55:15.353382  186275 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-20220509085506-6723 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube force-systemd-flag-20220509085506-6723]
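The server certificate generated here is signed by the minikube CA and carries the SANs listed in the log line (the node IP, loopback, and the hostname aliases). A self-contained sketch of producing such a certificate with Go's crypto/x509, using a throwaway CA in place of ca.pem/ca-key.pem and the SANs above (error handling elided for brevity; this is not minikube's provision.go code):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway CA standing in for ca.pem / ca-key.pem.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server cert carrying the SANs from the log line above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.force-systemd-flag-20220509085506-6723"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
			DNSNames:     []string{"localhost", "minikube", "force-systemd-flag-20220509085506-6723"},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}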
	I0509 08:55:15.517949  186275 provision.go:172] copyRemoteCerts
	I0509 08:55:15.518010  186275 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0509 08:55:15.518053  186275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220509085506-6723
	I0509 08:55:15.569619  186275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49329 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/force-systemd-flag-20220509085506-6723/id_rsa Username:docker}
	I0509 08:55:15.681827  186275 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0509 08:55:15.681912  186275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0509 08:55:15.707674  186275 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0509 08:55:15.707735  186275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/server.pem --> /etc/docker/server.pem (1285 bytes)
	I0509 08:55:15.737606  186275 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0509 08:55:15.737677  186275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0509 08:55:15.762082  186275 provision.go:86] duration metric: configureAuth took 461.04055ms
	I0509 08:55:15.762110  186275 ubuntu.go:193] setting minikube options for container-runtime
	I0509 08:55:15.762286  186275 config.go:178] Loaded profile config "force-systemd-flag-20220509085506-6723": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.0
	I0509 08:55:15.762345  186275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220509085506-6723
	I0509 08:55:15.812781  186275 main.go:134] libmachine: Using SSH client type: native
	I0509 08:55:15.812978  186275 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d8f60] 0x7dbfc0 <nil>  [] 0s} 127.0.0.1 49329 <nil> <nil>}
	I0509 08:55:15.813008  186275 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0509 08:55:15.966477  186275 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0509 08:55:15.966509  186275 ubuntu.go:71] root file system type: overlay
	I0509 08:55:15.966734  186275 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0509 08:55:15.966813  186275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220509085506-6723
	I0509 08:55:16.020918  186275 main.go:134] libmachine: Using SSH client type: native
	I0509 08:55:16.021126  186275 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d8f60] 0x7dbfc0 <nil>  [] 0s} 127.0.0.1 49329 <nil> <nil>}
	I0509 08:55:16.021228  186275 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0509 08:55:16.167496  186275 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0509 08:55:16.167591  186275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220509085506-6723
	I0509 08:55:16.215652  186275 main.go:134] libmachine: Using SSH client type: native
	I0509 08:55:16.215844  186275 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d8f60] 0x7dbfc0 <nil>  [] 0s} 127.0.0.1 49329 <nil> <nil>}
	I0509 08:55:16.215878  186275 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0509 08:55:19.293895  186275 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-03-10 14:05:44.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-05-09 08:55:16.164254667 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0509 08:55:19.293931  186275 machine.go:91] provisioned docker machine in 4.358406045s
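The diff above also shows why the generated unit contains two ExecStart= lines: the empty one clears the ExecStart inherited from the packaged docker.service so the TLS-enabled command can replace it. The update itself is guarded: minikube writes docker.service.new, diffs it against the live unit, and only swaps, reloads, and restarts when they differ. A sketch of that guard as a standalone program (run locally with sh -c here for illustration; minikube executes the same command string over SSH):

	package main

	import "os/exec"

	func main() {
		// diff -u exits 0 when the live unit and the new one are identical, so
		// the mv + daemon-reload + enable + restart branch only runs on a change.
		const guard = `sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || ` +
			`{ sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; ` +
			`sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }`
		_ = exec.Command("sh", "-c", guard).Run() // exit status intentionally ignored in this sketch
	}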
	I0509 08:55:19.293943  186275 client.go:171] LocalClient.Create took 12.452142175s
	I0509 08:55:19.293961  186275 start.go:173] duration metric: libmachine.API.Create for "force-systemd-flag-20220509085506-6723" took 12.452224315s
	I0509 08:55:19.293978  186275 start.go:306] post-start starting for "force-systemd-flag-20220509085506-6723" (driver="docker")
	I0509 08:55:19.293986  186275 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0509 08:55:19.294044  186275 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0509 08:55:19.294096  186275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220509085506-6723
	I0509 08:55:19.344535  186275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49329 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/force-systemd-flag-20220509085506-6723/id_rsa Username:docker}
	I0509 08:55:19.437964  186275 ssh_runner.go:195] Run: cat /etc/os-release
	I0509 08:55:19.441284  186275 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0509 08:55:19.441316  186275 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0509 08:55:19.441329  186275 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0509 08:55:19.441337  186275 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0509 08:55:19.441349  186275 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/addons for local assets ...
	I0509 08:55:19.441412  186275 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/files for local assets ...
	I0509 08:55:19.441502  186275 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/files/etc/ssl/certs/67232.pem -> 67232.pem in /etc/ssl/certs
	I0509 08:55:19.441515  186275 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/files/etc/ssl/certs/67232.pem -> /etc/ssl/certs/67232.pem
	I0509 08:55:19.441619  186275 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0509 08:55:19.449445  186275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/files/etc/ssl/certs/67232.pem --> /etc/ssl/certs/67232.pem (1708 bytes)
	I0509 08:55:19.468900  186275 start.go:309] post-start completed in 174.907279ms
	I0509 08:55:19.469269  186275 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-20220509085506-6723
	I0509 08:55:19.504166  186275 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/force-systemd-flag-20220509085506-6723/config.json ...
	I0509 08:55:19.504490  186275 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0509 08:55:19.504543  186275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220509085506-6723
	I0509 08:55:19.542630  186275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49329 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/force-systemd-flag-20220509085506-6723/id_rsa Username:docker}
	I0509 08:55:19.629858  186275 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0509 08:55:19.634631  186275 start.go:134] duration metric: createHost completed in 12.795752948s
	I0509 08:55:19.634679  186275 start.go:81] releasing machines lock for "force-systemd-flag-20220509085506-6723", held for 12.795931664s
	I0509 08:55:19.634779  186275 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-20220509085506-6723
	I0509 08:55:19.672020  186275 ssh_runner.go:195] Run: systemctl --version
	I0509 08:55:19.672079  186275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220509085506-6723
	I0509 08:55:19.672081  186275 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0509 08:55:19.672139  186275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220509085506-6723
	I0509 08:55:19.712645  186275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49329 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/force-systemd-flag-20220509085506-6723/id_rsa Username:docker}
	I0509 08:55:19.713066  186275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49329 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/force-systemd-flag-20220509085506-6723/id_rsa Username:docker}
	I0509 08:55:19.820313  186275 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0509 08:55:19.831294  186275 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0509 08:55:19.842177  186275 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0509 08:55:19.842249  186275 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0509 08:55:19.854707  186275 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0509 08:55:19.869905  186275 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0509 08:55:19.959248  186275 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0509 08:55:20.048233  186275 docker.go:510] Forcing docker to use systemd as cgroup manager...
	I0509 08:55:20.048266  186275 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (143 bytes)
	I0509 08:55:20.063726  186275 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0509 08:55:20.147830  186275 ssh_runner.go:195] Run: sudo systemctl restart docker
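The 143 bytes copied to /etc/docker/daemon.json are what actually force the systemd cgroup driver; the log records only the file's size, not its contents. A representative daemon.json for this purpose (assumed contents, not taken from this log), written from Go:

	package main

	import "os"

	// Assumed representative contents; the log only shows the size (143 bytes).
	const daemonJSON = `{
	  "exec-opts": ["native.cgroupdriver=systemd"],
	  "log-driver": "json-file",
	  "log-opts": {"max-size": "100m"},
	  "storage-driver": "overlay2"
	}`

	func main() {
		// Writing under /etc/docker requires root; this is a sketch, not a tool.
		if err := os.WriteFile("/etc/docker/daemon.json", []byte(daemonJSON), 0644); err != nil {
			panic(err)
		}
	}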
	I0509 08:55:20.749199  186275 start.go:447] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0509 08:55:20.749275  186275 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0509 08:55:20.755395  186275 retry.go:31] will retry after 1.104660288s: stat /var/run/cri-dockerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/cri-dockerd.sock': No such file or directory
	I0509 08:55:21.860210  186275 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0509 08:55:21.863895  186275 retry.go:31] will retry after 2.160763633s: stat /var/run/cri-dockerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/cri-dockerd.sock': No such file or directory
	I0509 08:55:24.025799  186275 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0509 08:55:24.029483  186275 retry.go:31] will retry after 2.62026012s: stat /var/run/cri-dockerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/cri-dockerd.sock': No such file or directory
	I0509 08:55:26.650216  186275 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0509 08:55:26.654165  186275 retry.go:31] will retry after 3.164785382s: stat /var/run/cri-dockerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/cri-dockerd.sock': No such file or directory
	I0509 08:55:29.820733  186275 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0509 08:55:29.824573  186275 retry.go:31] will retry after 4.680977329s: stat /var/run/cri-dockerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/cri-dockerd.sock': No such file or directory
	I0509 08:55:34.505793  186275 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0509 08:55:34.509492  186275 retry.go:31] will retry after 9.01243771s: stat /var/run/cri-dockerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/cri-dockerd.sock': No such file or directory
	I0509 08:55:43.522903  186275 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0509 08:55:43.527712  186275 retry.go:31] will retry after 6.442959172s: stat /var/run/cri-dockerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/cri-dockerd.sock': No such file or directory
	I0509 08:55:49.972765  186275 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0509 08:55:49.976326  186275 retry.go:31] will retry after 11.217246954s: stat /var/run/cri-dockerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/cri-dockerd.sock': No such file or directory
	I0509 08:56:01.196808  186275 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0509 08:56:01.201324  186275 retry.go:31] will retry after 15.299675834s: stat /var/run/cri-dockerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/cri-dockerd.sock': No such file or directory
	I0509 08:56:16.502128  186275 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0509 08:56:16.508537  186275 out.go:177] 
	W0509 08:56:16.510239  186275 out.go:239] X Exiting due to RUNTIME_ENABLE: stat /var/run/cri-dockerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/cri-dockerd.sock': No such file or directory
	
	W0509 08:56:16.510263  186275 out.go:239] * 
	W0509 08:56:16.511076  186275 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0509 08:56:16.512488  186275 out.go:177] 

                                                
                                                
** /stderr **
docker_test.go:87: failed to start minikube with args: "out/minikube-linux-amd64 start -p force-systemd-flag-20220509085506-6723 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker" : exit status 90
docker_test.go:104: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-20220509085506-6723 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:100: *** TestForceSystemdFlag FAILED at 2022-05-09 08:56:17.06529034 +0000 UTC m=+1902.508870957
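The root cause is visible in the retry trail above: after docker is restarted with the systemd cgroup configuration, /var/run/cri-dockerd.sock never appears, so the 60-second wait started at 08:55:20 exhausts its retries and minikube exits with RUNTIME_ENABLE (exit status 90). A minimal sketch of that kind of socket poll with growing delays (a simple 1.5x backoff is an assumption here; minikube's retry.go chooses its own, jittered intervals):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls path until the deadline, growing the delay between
	// attempts, much like the retry sequence logged above.
	func waitForSocket(path string, deadline time.Duration) error {
		delay := time.Second
		end := time.Now().Add(deadline)
		for time.Now().Before(end) {
			if _, err := os.Stat(path); err == nil {
				return nil // socket exists
			}
			time.Sleep(delay)
			delay = delay * 3 / 2
		}
		return fmt.Errorf("timed out after %s waiting for %s", deadline, path)
	}

	func main() {
		if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1) // minikube surfaces this as RUNTIME_ENABLE / exit status 90
		}
	}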
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect force-systemd-flag-20220509085506-6723
helpers_test.go:235: (dbg) docker inspect force-systemd-flag-20220509085506-6723:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "634cae78b95cf095c727bac389d6ad9415afc177a570f6c5ac3f33406f8ab9bb",
	        "Created": "2022-05-09T08:55:13.536771398Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 188949,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-05-09T08:55:14.07419285Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:44d43b69f3d5ba7f801dca891b535f23f9839671e82277938ec7dc42a22c50d6",
	        "ResolvConfPath": "/var/lib/docker/containers/634cae78b95cf095c727bac389d6ad9415afc177a570f6c5ac3f33406f8ab9bb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/634cae78b95cf095c727bac389d6ad9415afc177a570f6c5ac3f33406f8ab9bb/hostname",
	        "HostsPath": "/var/lib/docker/containers/634cae78b95cf095c727bac389d6ad9415afc177a570f6c5ac3f33406f8ab9bb/hosts",
	        "LogPath": "/var/lib/docker/containers/634cae78b95cf095c727bac389d6ad9415afc177a570f6c5ac3f33406f8ab9bb/634cae78b95cf095c727bac389d6ad9415afc177a570f6c5ac3f33406f8ab9bb-json.log",
	        "Name": "/force-systemd-flag-20220509085506-6723",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "force-systemd-flag-20220509085506-6723:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-flag-20220509085506-6723",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a0a76f03907cc4fdd862105b9e5582d69bb80683fdad91fdcaf84f129aafa90a-init/diff:/var/lib/docker/overlay2/beaaca4c58fe6ff4bdb88567c3d78ab7a23955eafaa5637df03ee2e0482d2aa6/diff:/var/lib/docker/overlay2/7c16b810bbfa3a2abff75078fa37b4cba0b2f101ff43d49beaabc3fd2602b1c9/diff:/var/lib/docker/overlay2/60f04c0e4baa8ad1c02ae5e34e6f505db0d740e2d7dc0833b2ff3b8037c1a9b6/diff:/var/lib/docker/overlay2/a12543300ae4ff803b2f0493a60a04a921312ec5a7b6ed493e66acadf998daef/diff:/var/lib/docker/overlay2/2d68f658a64cd8b7255bce93547db5d1b20b119ef6da8b9ce06614134661f235/diff:/var/lib/docker/overlay2/0f968f210c1565f6e8c4e444c650502c06a120d8767c9fefb7b6b2a09f4af83b/diff:/var/lib/docker/overlay2/987a67893acdccd514357a50db6c11680ae1899ec07a36085367a241ba1f0545/diff:/var/lib/docker/overlay2/d5446d1adc007f4390aa6173ef257b0bf52a9c9cc6533f16dd6c577fde5334b3/diff:/var/lib/docker/overlay2/265dd0cde77d578f6412ad48596c75c3548b5e077059c0fa835e4f22775fab83/diff:/var/lib/docker/overlay2/e6b98b
08dd64e06639e2a321169f84a89e8d21cea02d673819156eeb7a4747c3/diff:/var/lib/docker/overlay2/acf7117613a840e97b7a1101bfdce3a154a335ee0ffcaf74bd7f1b27b00cdaab/diff:/var/lib/docker/overlay2/9e88fa64aa800dac6e01eddc9cfd525a0dcd906c1adf98de066be87f87a6b52c/diff:/var/lib/docker/overlay2/6c539edead197cd449ca81fd34e3c534a6ca25446a52141d0ae4484aaac05482/diff:/var/lib/docker/overlay2/866719c52f41a692b94f42071d314de55b290912f946f4ec5f3785a7a5ae40df/diff:/var/lib/docker/overlay2/1c488a9b3bad141652873f4085ce1a466e1f1e8ccbd086d03a492b45179a6064/diff:/var/lib/docker/overlay2/bce5b942da09326e5a5c8595077544ced08031b1cd9b22ff8bff0a4458540139/diff:/var/lib/docker/overlay2/ffe34c3764a4eeac791f4672d47389ec3f399f716ab6e5531d7c5e587f3ace00/diff:/var/lib/docker/overlay2/a377e0779d03467b09a26898ae35b12f325d62984304817f49c693b2146c08f4/diff:/var/lib/docker/overlay2/6092380f07b29488cf0a30cb486638d86eaa8e00ff356a7985c6ac6f2fae5c1d/diff:/var/lib/docker/overlay2/1bc014cc0cf0a91c61f131c06b0194709f853dec23defe9233cbd9cc40030c28/diff:/var/lib/d
ocker/overlay2/0c81c4db7384c48318a800165af7f811348a8081efdd9dbd912e05e55c9eb4e0/diff:/var/lib/docker/overlay2/72b0c515d90bc71e27b766a9be89e315777a5bbe643d8fc508a9ae12557a58ce/diff:/var/lib/docker/overlay2/a4d193bf8c377d4cf1357b9261d3c54d995f17bda3db5abee3ef5caec001d75e/diff:/var/lib/docker/overlay2/763c6d291074b0842ecdeec1f3842fd6a0af0cb86839c82bc38cec0f40d095ca/diff:/var/lib/docker/overlay2/e4156eb2a94ac7136eb674fb8c22ea7f6dff50cd81e4857119d81112dc5ad99d/diff:/var/lib/docker/overlay2/be7effa3bb906b8c48aefab3cc72e657931bccd42c35b03bb52c679c37c70d25/diff:/var/lib/docker/overlay2/ccf8c1a68774cc6129c43490c675e6ba0ff0c88284ad899c9efb4b7492e92a06/diff:/var/lib/docker/overlay2/55f74cddb5c8f2da1131ddd67f7f2a20b8c2be8719b71047d5185fdb4722627d/diff:/var/lib/docker/overlay2/f6c38e83545e9de87d03b1e8e9b0239079acf47784afc11291b2172a11f8296d/diff:/var/lib/docker/overlay2/24604ce83180fcb6b9e1adbaf37db6776e4949ca5e1f4f9050b8fb8dc87d7591/diff:/var/lib/docker/overlay2/b03b1952bea32e53ce88b0d92d09f78ca76ceca0a146b088628be224813
ae87c/diff:/var/lib/docker/overlay2/d7ee02a5ba355c246c3107f286c4130457358aa56e0d8feb3850d953812fe76d/diff:/var/lib/docker/overlay2/1223ae0e5105ae89c78524490f0ccc5fe6dfa373e175fa70474b06c62aad91c7/diff:/var/lib/docker/overlay2/c3209d71eed94ed66b11ba3a7573c347d5c09a5a4fedb00418e52571abb2b5f9/diff:/var/lib/docker/overlay2/d6d32632e36023dec15e643fd41f77168442327cecdead87a47e70683cb0c660/diff:/var/lib/docker/overlay2/fd413b21f1f34027c98a1b4106f7e7db83e910860edd53ef838ac699659cd451/diff:/var/lib/docker/overlay2/9768e3244c157a7a12b8ec31507c1abc0fc806a9f325099262cfeb41e23a5fe1/diff:/var/lib/docker/overlay2/f65dda9d5b79903f5e68f6b9d4a59214d62f304da77faa7e148b90b402d98ca4/diff:/var/lib/docker/overlay2/b663c2dab6b5df7b606daa62e1df5e57f4a2503c2a9a19211f47359fd685dccf/diff:/var/lib/docker/overlay2/bbd620a7e494844db80a2bcd2fd6f170080c5273f1c33a576501214dd4475464/diff:/var/lib/docker/overlay2/23de36667695cec8ea0f4c98fc580213d356d39a3eba5aaab8b6ebeeb2b71596/diff:/var/lib/docker/overlay2/bafb3e8d91e83dcf824b6d5b3f56a67c483c42
723724b009a5aedf578351154f/diff:/var/lib/docker/overlay2/2798e8ce2f51dde257a9cf2dd800492f888fd02515c5f1de4133cc787ee12928/diff:/var/lib/docker/overlay2/e69a88f2dffdd2c7f72c45eaf2e3cd1a8772e0a2af22d6f34d2695bbda62b6e9/diff:/var/lib/docker/overlay2/708816f20892c0e4b94cbe8e80b169fff54ffd870bcde395bd8163dd03b25d0f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a0a76f03907cc4fdd862105b9e5582d69bb80683fdad91fdcaf84f129aafa90a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a0a76f03907cc4fdd862105b9e5582d69bb80683fdad91fdcaf84f129aafa90a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a0a76f03907cc4fdd862105b9e5582d69bb80683fdad91fdcaf84f129aafa90a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "force-systemd-flag-20220509085506-6723",
	                "Source": "/var/lib/docker/volumes/force-systemd-flag-20220509085506-6723/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-flag-20220509085506-6723",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-flag-20220509085506-6723",
	                "name.minikube.sigs.k8s.io": "force-systemd-flag-20220509085506-6723",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "556cc68177b6a263ee513219619b3591844711c76f258619c80756964e9f0259",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49329"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49328"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49325"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49327"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49326"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/556cc68177b6",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-flag-20220509085506-6723": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "634cae78b95c",
	                        "force-systemd-flag-20220509085506-6723"
	                    ],
	                    "NetworkID": "05654064b3d63fa7cb6d2a2f0ea78450dcaad1a8649d00f514687ff9fc963d05",
	                    "EndpointID": "fb589fa24bdef5dd5c68f1f4c2ef008422e8b2c17aaf546738d2ad69938990f4",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
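The Ports map in the inspect output above is the same structure the test harness reads back with a Go template later in this log. As a minimal sketch (assuming the container still exists), the mapped host port for the node's SSH endpoint can be pulled out directly; per the output above this prints 49329:

    # Extract the host port bound to 22/tcp using the same template
    # cli_runner uses elsewhere in this log.
    docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
      force-systemd-flag-20220509085506-6723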
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p force-systemd-flag-20220509085506-6723 -n force-systemd-flag-20220509085506-6723
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p force-systemd-flag-20220509085506-6723 -n force-systemd-flag-20220509085506-6723: exit status 6 (451.913557ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0509 08:56:17.544962  207933 status.go:413] kubeconfig endpoint: extract IP: "force-systemd-flag-20220509085506-6723" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "force-systemd-flag-20220509085506-6723" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
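Exit status 6 here is the kubeconfig-mismatch path: status found the container "Running" but could not extract an API endpoint because no cluster entry for the profile exists in the kubeconfig named in the stderr above. A quick manual cross-check (assuming kubectl is on the runner's PATH):

    # List the contexts actually recorded in the integration kubeconfig;
    # the force-systemd-flag profile is expected to be missing here.
    KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/kubeconfig \
      kubectl config get-contexts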
helpers_test.go:175: Cleaning up "force-systemd-flag-20220509085506-6723" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-20220509085506-6723
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-20220509085506-6723: (2.186382921s)
--- FAIL: TestForceSystemdFlag (73.35s)

TestForceSystemdEnv (73.19s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv


=== CONT  TestForceSystemdEnv
docker_test.go:150: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-20220509085537-6723 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker

=== CONT  TestForceSystemdEnv
docker_test.go:150: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p force-systemd-env-20220509085537-6723 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: exit status 90 (1m9.877336272s)

-- stdout --
	* [force-systemd-env-20220509085537-6723] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14070
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the docker driver based on user configuration
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* Using Docker driver with the root privilege
	* Starting control plane node force-systemd-env-20220509085537-6723 in cluster force-systemd-env-20220509085537-6723
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0509 08:55:37.703700  196368 out.go:296] Setting OutFile to fd 1 ...
	I0509 08:55:37.703818  196368 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0509 08:55:37.703826  196368 out.go:309] Setting ErrFile to fd 2...
	I0509 08:55:37.703831  196368 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0509 08:55:37.703932  196368 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/bin
	I0509 08:55:37.704216  196368 out.go:303] Setting JSON to false
	I0509 08:55:37.705891  196368 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":2292,"bootTime":1652084246,"procs":1137,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1024-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0509 08:55:37.705964  196368 start.go:125] virtualization: kvm guest
	I0509 08:55:37.709069  196368 out.go:177] * [force-systemd-env-20220509085537-6723] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0509 08:55:37.710815  196368 notify.go:193] Checking for updates...
	I0509 08:55:37.712575  196368 out.go:177]   - MINIKUBE_LOCATION=14070
	I0509 08:55:37.714489  196368 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0509 08:55:37.716261  196368 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/kubeconfig
	I0509 08:55:37.718013  196368 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube
	I0509 08:55:37.719620  196368 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0509 08:55:37.721388  196368 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0509 08:55:37.723503  196368 config.go:178] Loaded profile config "force-systemd-flag-20220509085506-6723": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.0
	I0509 08:55:37.723593  196368 config.go:178] Loaded profile config "kubernetes-upgrade-20220509085441-6723": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.1-rc.0
	I0509 08:55:37.723672  196368 config.go:178] Loaded profile config "running-upgrade-20220509085501-6723": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0509 08:55:37.723736  196368 driver.go:346] Setting default libvirt URI to qemu:///system
	I0509 08:55:37.769991  196368 docker.go:137] docker version: linux-20.10.15
	I0509 08:55:37.770107  196368 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0509 08:55:37.886554  196368 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:60 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:59 SystemTime:2022-05-09 08:55:37.803693218 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1024-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clie
ntInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0509 08:55:37.886688  196368 docker.go:254] overlay module found
	I0509 08:55:37.889012  196368 out.go:177] * Using the docker driver based on user configuration
	I0509 08:55:37.890729  196368 start.go:284] selected driver: docker
	I0509 08:55:37.890746  196368 start.go:801] validating driver "docker" against <nil>
	I0509 08:55:37.890772  196368 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0509 08:55:37.890826  196368 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0509 08:55:37.890851  196368 out.go:239] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0509 08:55:37.892419  196368 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
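These two W-level warnings are derived from the capabilities the Docker daemon advertises, not from a direct kernel probe. The same signals can be read back by hand (a sketch; the docker info field names match the JSON dump captured above):

    # Docker's view of memory/swap limit support on this host.
    docker info --format 'MemoryLimit={{.MemoryLimit}} SwapLimit={{.SwapLimit}}'
    # On a cgroup v1 host, whether the memory controller is enabled.
    grep -w memory /proc/cgroups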
	I0509 08:55:37.894573  196368 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0509 08:55:38.010131  196368 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:60 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:59 SystemTime:2022-05-09 08:55:37.925613578 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1024-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clie
ntInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0509 08:55:38.010254  196368 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0509 08:55:38.010426  196368 start_flags.go:829] Wait components to verify : map[apiserver:true system_pods:true]
	I0509 08:55:38.012852  196368 out.go:177] * Using Docker driver with the root privilege
	I0509 08:55:38.014495  196368 cni.go:95] Creating CNI manager for ""
	I0509 08:55:38.014517  196368 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0509 08:55:38.014533  196368 start_flags.go:306] config:
	{Name:force-systemd-env-20220509085537-6723 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.0 ClusterName:force-systemd-env-20220509085537-6723 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0509 08:55:38.016699  196368 out.go:177] * Starting control plane node force-systemd-env-20220509085537-6723 in cluster force-systemd-env-20220509085537-6723
	I0509 08:55:38.018297  196368 cache.go:120] Beginning downloading kic base image for docker with docker
	I0509 08:55:38.019765  196368 out.go:177] * Pulling base image ...
	I0509 08:55:38.021243  196368 preload.go:132] Checking if preload exists for k8s version v1.24.0 and runtime docker
	I0509 08:55:38.021292  196368 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.0-docker-overlay2-amd64.tar.lz4
	I0509 08:55:38.021315  196368 cache.go:57] Caching tarball of preloaded images
	I0509 08:55:38.021346  196368 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0509 08:55:38.021563  196368 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0509 08:55:38.021588  196368 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.0 on docker
	I0509 08:55:38.021695  196368 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/force-systemd-env-20220509085537-6723/config.json ...
	I0509 08:55:38.021726  196368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/force-systemd-env-20220509085537-6723/config.json: {Name:mk10b892bd3145ed5729c135fc9d141fe4000a6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0509 08:55:38.066796  196368 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0509 08:55:38.066831  196368 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0509 08:55:38.066853  196368 cache.go:206] Successfully downloaded all kic artifacts
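The "exists in daemon, skipping load" decision reduces to asking the local daemon for the image by its pinned digest. An equivalent manual check prints the image ID on success and exits non-zero if the image is absent:

    docker image inspect --format '{{.Id}}' \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5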
	I0509 08:55:38.066883  196368 start.go:352] acquiring machines lock for force-systemd-env-20220509085537-6723: {Name:mk86ae22ada434fcb6df3584d0f985a8229f2ce0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0509 08:55:38.067770  196368 start.go:356] acquired machines lock for "force-systemd-env-20220509085537-6723" in 862.689µs
	I0509 08:55:38.067813  196368 start.go:91] Provisioning new machine with config: &{Name:force-systemd-env-20220509085537-6723 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.0 ClusterName:force-systemd-env-20220509085537-6723 Namespace:de
fault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1
.24.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0509 08:55:38.067923  196368 start.go:131] createHost starting for "" (driver="docker")
	I0509 08:55:38.070129  196368 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0509 08:55:38.070378  196368 start.go:165] libmachine.API.Create for "force-systemd-env-20220509085537-6723" (driver="docker")
	I0509 08:55:38.070413  196368 client.go:168] LocalClient.Create starting
	I0509 08:55:38.070491  196368 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/ca.pem
	I0509 08:55:38.070528  196368 main.go:134] libmachine: Decoding PEM data...
	I0509 08:55:38.070552  196368 main.go:134] libmachine: Parsing certificate...
	I0509 08:55:38.070630  196368 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/cert.pem
	I0509 08:55:38.070657  196368 main.go:134] libmachine: Decoding PEM data...
	I0509 08:55:38.070670  196368 main.go:134] libmachine: Parsing certificate...
	I0509 08:55:38.071004  196368 cli_runner.go:164] Run: docker network inspect force-systemd-env-20220509085537-6723 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0509 08:55:38.107366  196368 cli_runner.go:211] docker network inspect force-systemd-env-20220509085537-6723 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0509 08:55:38.107469  196368 network_create.go:272] running [docker network inspect force-systemd-env-20220509085537-6723] to gather additional debugging logs...
	I0509 08:55:38.107498  196368 cli_runner.go:164] Run: docker network inspect force-systemd-env-20220509085537-6723
	W0509 08:55:38.141798  196368 cli_runner.go:211] docker network inspect force-systemd-env-20220509085537-6723 returned with exit code 1
	I0509 08:55:38.141838  196368 network_create.go:275] error running [docker network inspect force-systemd-env-20220509085537-6723]: docker network inspect force-systemd-env-20220509085537-6723: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: force-systemd-env-20220509085537-6723
	I0509 08:55:38.141859  196368 network_create.go:277] output of [docker network inspect force-systemd-env-20220509085537-6723]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: force-systemd-env-20220509085537-6723
	
	** /stderr **
	I0509 08:55:38.141906  196368 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0509 08:55:38.199301  196368 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-05654064b3d6 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:d0:03:24:57}}
	I0509 08:55:38.200015  196368 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.58.0:0xc0006184e0] misses:0}
	I0509 08:55:38.200063  196368 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0509 08:55:38.200084  196368 network_create.go:115] attempt to create docker network force-systemd-env-20220509085537-6723 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0509 08:55:38.200147  196368 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20220509085537-6723
	I0509 08:55:38.280367  196368 network_create.go:99] docker network force-systemd-env-20220509085537-6723 192.168.58.0/24 created
	I0509 08:55:38.280407  196368 kic.go:106] calculated static IP "192.168.58.2" for the "force-systemd-env-20220509085537-6723" container
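The subnet walk above (skip 192.168.49.0/24 as taken, reserve 192.168.58.0/24) is checked against the ranges Docker's IPAM has already handed out. The occupied subnets can be listed in one pass (a sketch assuming only standard network drivers are in play):

    # Print every Docker network with the subnet(s) its IPAM config owns;
    # here 192.168.49.0/24 belongs to the force-systemd-flag network.
    docker network ls -q | xargs -r docker network inspect \
      --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}'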
	I0509 08:55:38.280475  196368 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0509 08:55:38.344181  196368 cli_runner.go:164] Run: docker volume create force-systemd-env-20220509085537-6723 --label name.minikube.sigs.k8s.io=force-systemd-env-20220509085537-6723 --label created_by.minikube.sigs.k8s.io=true
	I0509 08:55:38.382960  196368 oci.go:103] Successfully created a docker volume force-systemd-env-20220509085537-6723
	I0509 08:55:38.383049  196368 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-20220509085537-6723-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-20220509085537-6723 --entrypoint /usr/bin/test -v force-systemd-env-20220509085537-6723:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -d /var/lib
	I0509 08:55:40.809302  196368 cli_runner.go:217] Completed: docker run --rm --name force-systemd-env-20220509085537-6723-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-20220509085537-6723 --entrypoint /usr/bin/test -v force-systemd-env-20220509085537-6723:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -d /var/lib: (2.426207207s)
	I0509 08:55:40.809336  196368 oci.go:107] Successfully prepared a docker volume force-systemd-env-20220509085537-6723
	I0509 08:55:40.809383  196368 preload.go:132] Checking if preload exists for k8s version v1.24.0 and runtime docker
	I0509 08:55:40.809409  196368 kic.go:179] Starting extracting preloaded images to volume ...
	I0509 08:55:40.809480  196368 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-20220509085537-6723:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I0509 08:55:47.766458  196368 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-20220509085537-6723:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir: (6.956920871s)
	I0509 08:55:47.766503  196368 kic.go:188] duration metric: took 6.957088 seconds to extract preloaded images to volume
	W0509 08:55:47.766554  196368 oci.go:136] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0509 08:55:47.766568  196368 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0509 08:55:47.766620  196368 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0509 08:55:47.877765  196368 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-20220509085537-6723 --name force-systemd-env-20220509085537-6723 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-20220509085537-6723 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-20220509085537-6723 --network force-systemd-env-20220509085537-6723 --ip 192.168.58.2 --volume force-systemd-env-20220509085537-6723:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5
	I0509 08:55:48.344543  196368 cli_runner.go:164] Run: docker container inspect force-systemd-env-20220509085537-6723 --format={{.State.Running}}
	I0509 08:55:48.379539  196368 cli_runner.go:164] Run: docker container inspect force-systemd-env-20220509085537-6723 --format={{.State.Status}}
	I0509 08:55:48.415667  196368 cli_runner.go:164] Run: docker exec force-systemd-env-20220509085537-6723 stat /var/lib/dpkg/alternatives/iptables
	I0509 08:55:48.479300  196368 oci.go:279] the created container "force-systemd-env-20220509085537-6723" has a running status.
	I0509 08:55:48.479375  196368 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/force-systemd-env-20220509085537-6723/id_rsa...
	I0509 08:55:48.605398  196368 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/force-systemd-env-20220509085537-6723/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0509 08:55:48.605454  196368 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/force-systemd-env-20220509085537-6723/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0509 08:55:48.716015  196368 cli_runner.go:164] Run: docker container inspect force-systemd-env-20220509085537-6723 --format={{.State.Status}}
	I0509 08:55:48.758880  196368 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0509 08:55:48.758907  196368 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-20220509085537-6723 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0509 08:55:48.864819  196368 cli_runner.go:164] Run: docker container inspect force-systemd-env-20220509085537-6723 --format={{.State.Status}}
	I0509 08:55:48.907919  196368 machine.go:88] provisioning docker machine ...
	I0509 08:55:48.907966  196368 ubuntu.go:169] provisioning hostname "force-systemd-env-20220509085537-6723"
	I0509 08:55:48.908041  196368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220509085537-6723
	I0509 08:55:48.945952  196368 main.go:134] libmachine: Using SSH client type: native
	I0509 08:55:48.946126  196368 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d8f60] 0x7dbfc0 <nil>  [] 0s} 127.0.0.1 49334 <nil> <nil>}
	I0509 08:55:48.946144  196368 main.go:134] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-20220509085537-6723 && echo "force-systemd-env-20220509085537-6723" | sudo tee /etc/hostname
	I0509 08:55:49.090740  196368 main.go:134] libmachine: SSH cmd err, output: <nil>: force-systemd-env-20220509085537-6723
	
	I0509 08:55:49.090838  196368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220509085537-6723
	I0509 08:55:49.126542  196368 main.go:134] libmachine: Using SSH client type: native
	I0509 08:55:49.126691  196368 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d8f60] 0x7dbfc0 <nil>  [] 0s} 127.0.0.1 49334 <nil> <nil>}
	I0509 08:55:49.126712  196368 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-20220509085537-6723' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-20220509085537-6723/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-20220509085537-6723' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0509 08:55:49.252900  196368 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0509 08:55:49.252933  196368 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/key.pem ServerCertR
emotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube}
	I0509 08:55:49.252991  196368 ubuntu.go:177] setting up certificates
	I0509 08:55:49.253008  196368 provision.go:83] configureAuth start
	I0509 08:55:49.253077  196368 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-20220509085537-6723
	I0509 08:55:49.291741  196368 provision.go:138] copyHostCerts
	I0509 08:55:49.291789  196368 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/ca.pem
	I0509 08:55:49.291823  196368 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/ca.pem, removing ...
	I0509 08:55:49.291832  196368 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/ca.pem
	I0509 08:55:49.291915  196368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/ca.pem (1078 bytes)
	I0509 08:55:49.292001  196368 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cert.pem
	I0509 08:55:49.292031  196368 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cert.pem, removing ...
	I0509 08:55:49.292038  196368 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cert.pem
	I0509 08:55:49.292076  196368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cert.pem (1123 bytes)
	I0509 08:55:49.292128  196368 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/key.pem
	I0509 08:55:49.292151  196368 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/key.pem, removing ...
	I0509 08:55:49.292157  196368 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/key.pem
	I0509 08:55:49.292200  196368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/key.pem (1679 bytes)
	I0509 08:55:49.292285  196368 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-20220509085537-6723 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube force-systemd-env-20220509085537-6723]
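The SAN list logged above (node IP, 127.0.0.1, localhost, minikube, the profile name) is what the daemon's TLS clients later validate against. One way to confirm the names made it into the issued certificate (assuming openssl is installed on the runner):

    # Dump the Subject Alternative Name extension of the server cert.
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'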
	I0509 08:55:49.356514  196368 provision.go:172] copyRemoteCerts
	I0509 08:55:49.356589  196368 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0509 08:55:49.356645  196368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220509085537-6723
	I0509 08:55:49.395783  196368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49334 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/force-systemd-env-20220509085537-6723/id_rsa Username:docker}
	I0509 08:55:49.491123  196368 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0509 08:55:49.491203  196368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0509 08:55:49.509862  196368 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0509 08:55:49.509921  196368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/server.pem --> /etc/docker/server.pem (1281 bytes)
	I0509 08:55:49.527412  196368 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0509 08:55:49.527479  196368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0509 08:55:49.545053  196368 provision.go:86] duration metric: configureAuth took 292.027386ms
	I0509 08:55:49.545089  196368 ubuntu.go:193] setting minikube options for container-runtime
	I0509 08:55:49.545232  196368 config.go:178] Loaded profile config "force-systemd-env-20220509085537-6723": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.0
	I0509 08:55:49.545279  196368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220509085537-6723
	I0509 08:55:49.582582  196368 main.go:134] libmachine: Using SSH client type: native
	I0509 08:55:49.582797  196368 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d8f60] 0x7dbfc0 <nil>  [] 0s} 127.0.0.1 49334 <nil> <nil>}
	I0509 08:55:49.582818  196368 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0509 08:55:49.705852  196368 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0509 08:55:49.705879  196368 ubuntu.go:71] root file system type: overlay
	I0509 08:55:49.706052  196368 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0509 08:55:49.706121  196368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220509085537-6723
	I0509 08:55:49.741030  196368 main.go:134] libmachine: Using SSH client type: native
	I0509 08:55:49.741181  196368 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d8f60] 0x7dbfc0 <nil>  [] 0s} 127.0.0.1 49334 <nil> <nil>}
	I0509 08:55:49.741247  196368 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0509 08:55:49.870685  196368 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0509 08:55:49.870772  196368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220509085537-6723
	I0509 08:55:49.903746  196368 main.go:134] libmachine: Using SSH client type: native
	I0509 08:55:49.903914  196368 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d8f60] 0x7dbfc0 <nil>  [] 0s} 127.0.0.1 49334 <nil> <nil>}
	I0509 08:55:49.903943  196368 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0509 08:55:50.644341  196368 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-03-10 14:05:44.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-05-09 08:55:49.867436341 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
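Once this diff is applied and dockerd restarts inside the node, the property the TestForceSystemd* tests ultimately assert is the cgroup driver the inner daemon reports. A manual spot check against the still-running node container (a sketch; note the unit written above does not itself set the driver, so on a real run the value depends on configuration minikube applies elsewhere):

    # The unit file actually in effect inside the node ...
    docker exec force-systemd-env-20220509085537-6723 systemctl cat docker
    # ... and the cgroup driver the inner dockerd reports; the test expects
    # "systemd" when MINIKUBE_FORCE_SYSTEMD=true has taken effect.
    docker exec force-systemd-env-20220509085537-6723 docker info --format '{{.CgroupDriver}}'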
	I0509 08:55:50.644381  196368 machine.go:91] provisioned docker machine in 1.736433606s
	I0509 08:55:50.644392  196368 client.go:171] LocalClient.Create took 12.573966653s
	I0509 08:55:50.644405  196368 start.go:173] duration metric: libmachine.API.Create for "force-systemd-env-20220509085537-6723" took 12.574027437s
	I0509 08:55:50.644419  196368 start.go:306] post-start starting for "force-systemd-env-20220509085537-6723" (driver="docker")
	I0509 08:55:50.644425  196368 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0509 08:55:50.644490  196368 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0509 08:55:50.644538  196368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220509085537-6723
	I0509 08:55:50.680061  196368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49334 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/force-systemd-env-20220509085537-6723/id_rsa Username:docker}
	I0509 08:55:50.773248  196368 ssh_runner.go:195] Run: cat /etc/os-release
	I0509 08:55:50.776455  196368 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0509 08:55:50.776487  196368 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0509 08:55:50.776500  196368 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0509 08:55:50.776507  196368 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0509 08:55:50.776524  196368 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/addons for local assets ...
	I0509 08:55:50.776588  196368 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/files for local assets ...
	I0509 08:55:50.776718  196368 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/files/etc/ssl/certs/67232.pem -> 67232.pem in /etc/ssl/certs
	I0509 08:55:50.776742  196368 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/files/etc/ssl/certs/67232.pem -> /etc/ssl/certs/67232.pem
	I0509 08:55:50.776849  196368 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0509 08:55:50.785725  196368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/files/etc/ssl/certs/67232.pem --> /etc/ssl/certs/67232.pem (1708 bytes)
	I0509 08:55:50.809266  196368 start.go:309] post-start completed in 164.834174ms
	I0509 08:55:50.809673  196368 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-20220509085537-6723
	I0509 08:55:50.841875  196368 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/force-systemd-env-20220509085537-6723/config.json ...
	I0509 08:55:50.842118  196368 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0509 08:55:50.842165  196368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220509085537-6723
	I0509 08:55:50.880732  196368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49334 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/force-systemd-env-20220509085537-6723/id_rsa Username:docker}
	I0509 08:55:50.969884  196368 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0509 08:55:50.974997  196368 start.go:134] duration metric: createHost completed in 12.907058732s
	I0509 08:55:50.976810  196368 start.go:81] releasing machines lock for "force-systemd-env-20220509085537-6723", held for 12.908991953s
	I0509 08:55:50.977113  196368 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-20220509085537-6723
	I0509 08:55:51.016442  196368 ssh_runner.go:195] Run: systemctl --version
	I0509 08:55:51.016505  196368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220509085537-6723
	I0509 08:55:51.016512  196368 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0509 08:55:51.016590  196368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20220509085537-6723
	I0509 08:55:51.049390  196368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49334 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/force-systemd-env-20220509085537-6723/id_rsa Username:docker}
	I0509 08:55:51.049559  196368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49334 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/force-systemd-env-20220509085537-6723/id_rsa Username:docker}
	I0509 08:55:51.154858  196368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0509 08:55:51.165817  196368 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0509 08:55:51.176584  196368 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0509 08:55:51.176690  196368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0509 08:55:51.188961  196368 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0509 08:55:51.205886  196368 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0509 08:55:51.289731  196368 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0509 08:55:51.377840  196368 docker.go:510] Forcing docker to use systemd as cgroup manager...
	I0509 08:55:51.377883  196368 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (143 bytes)
	I0509 08:55:51.397122  196368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0509 08:55:51.476037  196368 ssh_runner.go:195] Run: sudo systemctl restart docker
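The 143-byte daemon.json pushed just above is not echoed in the log; forcing the systemd cgroup manager goes through dockerd's exec-opts. A minimal sketch of the relevant setting, assuming the standard option name (the file minikube actually writes may carry additional keys):

    # Hypothetical minimal /etc/docker/daemon.json selecting the
    # systemd cgroup driver
    cat <<'EOF' | sudo tee /etc/docker/daemon.json
    {
      "exec-opts": ["native.cgroupdriver=systemd"]
    }
    EOF
    sudo systemctl daemon-reload && sudo systemctl restart docker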
	I0509 08:55:51.750454  196368 start.go:447] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0509 08:55:51.750524  196368 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0509 08:55:51.757126  196368 retry.go:31] will retry after 1.104660288s: stat /var/run/cri-dockerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/cri-dockerd.sock': No such file or directory
	I0509 08:55:52.862310  196368 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0509 08:55:52.866400  196368 retry.go:31] will retry after 2.160763633s: stat /var/run/cri-dockerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/cri-dockerd.sock': No such file or directory
	I0509 08:55:55.028753  196368 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0509 08:55:55.032562  196368 retry.go:31] will retry after 2.62026012s: stat /var/run/cri-dockerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/cri-dockerd.sock': No such file or directory
	I0509 08:55:57.653803  196368 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0509 08:55:57.657821  196368 retry.go:31] will retry after 3.164785382s: stat /var/run/cri-dockerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/cri-dockerd.sock': No such file or directory
	I0509 08:56:00.824736  196368 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0509 08:56:00.828379  196368 retry.go:31] will retry after 4.680977329s: stat /var/run/cri-dockerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/cri-dockerd.sock': No such file or directory
	I0509 08:56:05.510054  196368 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0509 08:56:05.513776  196368 retry.go:31] will retry after 9.01243771s: stat /var/run/cri-dockerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/cri-dockerd.sock': No such file or directory
	I0509 08:56:14.527678  196368 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0509 08:56:14.531945  196368 retry.go:31] will retry after 6.442959172s: stat /var/run/cri-dockerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/cri-dockerd.sock': No such file or directory
	I0509 08:56:20.976821  196368 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0509 08:56:20.981151  196368 retry.go:31] will retry after 11.217246954s: stat /var/run/cri-dockerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/cri-dockerd.sock': No such file or directory
	I0509 08:56:32.200767  196368 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0509 08:56:32.204709  196368 retry.go:31] will retry after 15.299675834s: stat /var/run/cri-dockerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/cri-dockerd.sock': No such file or directory
	I0509 08:56:47.504771  196368 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0509 08:56:47.511505  196368 out.go:177] 
	W0509 08:56:47.513345  196368 out.go:239] X Exiting due to RUNTIME_ENABLE: stat /var/run/cri-dockerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/cri-dockerd.sock': No such file or directory
	
	W0509 08:56:47.513377  196368 out.go:239] * 
	W0509 08:56:47.514144  196368 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0509 08:56:47.515704  196368 out.go:177] 

                                                
                                                
** /stderr **
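The failure above is a timeout, not a crash: minikube polls for /var/run/cri-dockerd.sock with a growing, jittered delay and gives up once the 60-second budget from start.go:447 is spent, because the restarted docker.service never brought the cri-dockerd socket up. A rough bash equivalent of that wait loop, with hypothetical delay values:

    # Poll for the cri-dockerd socket within a 60s budget,
    # roughly mirroring the retry sequence in the log above
    deadline=$((SECONDS + 60))
    delay=1
    until stat /var/run/cri-dockerd.sock >/dev/null 2>&1; do
        [ "$SECONDS" -ge "$deadline" ] && { echo "timed out" >&2; exit 1; }
        sleep "$delay"
        delay=$((delay * 2))
    done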
docker_test.go:152: failed to start minikube with args: "out/minikube-linux-amd64 start -p force-systemd-env-20220509085537-6723 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker" : exit status 90
docker_test.go:104: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-20220509085537-6723 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:161: *** TestForceSystemdEnv FAILED at 2022-05-09 08:56:48.075608514 +0000 UTC m=+1933.519189129
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestForceSystemdEnv]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect force-systemd-env-20220509085537-6723
helpers_test.go:235: (dbg) docker inspect force-systemd-env-20220509085537-6723:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7faf13aee668810657bc38da03e782404edd02cc3bdcc367d5e43eeb9de138cd",
	        "Created": "2022-05-09T08:55:47.915608725Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 198845,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-05-09T08:55:48.333527068Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:44d43b69f3d5ba7f801dca891b535f23f9839671e82277938ec7dc42a22c50d6",
	        "ResolvConfPath": "/var/lib/docker/containers/7faf13aee668810657bc38da03e782404edd02cc3bdcc367d5e43eeb9de138cd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7faf13aee668810657bc38da03e782404edd02cc3bdcc367d5e43eeb9de138cd/hostname",
	        "HostsPath": "/var/lib/docker/containers/7faf13aee668810657bc38da03e782404edd02cc3bdcc367d5e43eeb9de138cd/hosts",
	        "LogPath": "/var/lib/docker/containers/7faf13aee668810657bc38da03e782404edd02cc3bdcc367d5e43eeb9de138cd/7faf13aee668810657bc38da03e782404edd02cc3bdcc367d5e43eeb9de138cd-json.log",
	        "Name": "/force-systemd-env-20220509085537-6723",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "force-systemd-env-20220509085537-6723:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-env-20220509085537-6723",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2e69a9b27bacdb52367e9be11f9690d0b792694998b22f018de0a9812af62737-init/diff:/var/lib/docker/overlay2/beaaca4c58fe6ff4bdb88567c3d78ab7a23955eafaa5637df03ee2e0482d2aa6/diff:/var/lib/docker/overlay2/7c16b810bbfa3a2abff75078fa37b4cba0b2f101ff43d49beaabc3fd2602b1c9/diff:/var/lib/docker/overlay2/60f04c0e4baa8ad1c02ae5e34e6f505db0d740e2d7dc0833b2ff3b8037c1a9b6/diff:/var/lib/docker/overlay2/a12543300ae4ff803b2f0493a60a04a921312ec5a7b6ed493e66acadf998daef/diff:/var/lib/docker/overlay2/2d68f658a64cd8b7255bce93547db5d1b20b119ef6da8b9ce06614134661f235/diff:/var/lib/docker/overlay2/0f968f210c1565f6e8c4e444c650502c06a120d8767c9fefb7b6b2a09f4af83b/diff:/var/lib/docker/overlay2/987a67893acdccd514357a50db6c11680ae1899ec07a36085367a241ba1f0545/diff:/var/lib/docker/overlay2/d5446d1adc007f4390aa6173ef257b0bf52a9c9cc6533f16dd6c577fde5334b3/diff:/var/lib/docker/overlay2/265dd0cde77d578f6412ad48596c75c3548b5e077059c0fa835e4f22775fab83/diff:/var/lib/docker/overlay2/e6b98b
08dd64e06639e2a321169f84a89e8d21cea02d673819156eeb7a4747c3/diff:/var/lib/docker/overlay2/acf7117613a840e97b7a1101bfdce3a154a335ee0ffcaf74bd7f1b27b00cdaab/diff:/var/lib/docker/overlay2/9e88fa64aa800dac6e01eddc9cfd525a0dcd906c1adf98de066be87f87a6b52c/diff:/var/lib/docker/overlay2/6c539edead197cd449ca81fd34e3c534a6ca25446a52141d0ae4484aaac05482/diff:/var/lib/docker/overlay2/866719c52f41a692b94f42071d314de55b290912f946f4ec5f3785a7a5ae40df/diff:/var/lib/docker/overlay2/1c488a9b3bad141652873f4085ce1a466e1f1e8ccbd086d03a492b45179a6064/diff:/var/lib/docker/overlay2/bce5b942da09326e5a5c8595077544ced08031b1cd9b22ff8bff0a4458540139/diff:/var/lib/docker/overlay2/ffe34c3764a4eeac791f4672d47389ec3f399f716ab6e5531d7c5e587f3ace00/diff:/var/lib/docker/overlay2/a377e0779d03467b09a26898ae35b12f325d62984304817f49c693b2146c08f4/diff:/var/lib/docker/overlay2/6092380f07b29488cf0a30cb486638d86eaa8e00ff356a7985c6ac6f2fae5c1d/diff:/var/lib/docker/overlay2/1bc014cc0cf0a91c61f131c06b0194709f853dec23defe9233cbd9cc40030c28/diff:/var/lib/d
ocker/overlay2/0c81c4db7384c48318a800165af7f811348a8081efdd9dbd912e05e55c9eb4e0/diff:/var/lib/docker/overlay2/72b0c515d90bc71e27b766a9be89e315777a5bbe643d8fc508a9ae12557a58ce/diff:/var/lib/docker/overlay2/a4d193bf8c377d4cf1357b9261d3c54d995f17bda3db5abee3ef5caec001d75e/diff:/var/lib/docker/overlay2/763c6d291074b0842ecdeec1f3842fd6a0af0cb86839c82bc38cec0f40d095ca/diff:/var/lib/docker/overlay2/e4156eb2a94ac7136eb674fb8c22ea7f6dff50cd81e4857119d81112dc5ad99d/diff:/var/lib/docker/overlay2/be7effa3bb906b8c48aefab3cc72e657931bccd42c35b03bb52c679c37c70d25/diff:/var/lib/docker/overlay2/ccf8c1a68774cc6129c43490c675e6ba0ff0c88284ad899c9efb4b7492e92a06/diff:/var/lib/docker/overlay2/55f74cddb5c8f2da1131ddd67f7f2a20b8c2be8719b71047d5185fdb4722627d/diff:/var/lib/docker/overlay2/f6c38e83545e9de87d03b1e8e9b0239079acf47784afc11291b2172a11f8296d/diff:/var/lib/docker/overlay2/24604ce83180fcb6b9e1adbaf37db6776e4949ca5e1f4f9050b8fb8dc87d7591/diff:/var/lib/docker/overlay2/b03b1952bea32e53ce88b0d92d09f78ca76ceca0a146b088628be224813
ae87c/diff:/var/lib/docker/overlay2/d7ee02a5ba355c246c3107f286c4130457358aa56e0d8feb3850d953812fe76d/diff:/var/lib/docker/overlay2/1223ae0e5105ae89c78524490f0ccc5fe6dfa373e175fa70474b06c62aad91c7/diff:/var/lib/docker/overlay2/c3209d71eed94ed66b11ba3a7573c347d5c09a5a4fedb00418e52571abb2b5f9/diff:/var/lib/docker/overlay2/d6d32632e36023dec15e643fd41f77168442327cecdead87a47e70683cb0c660/diff:/var/lib/docker/overlay2/fd413b21f1f34027c98a1b4106f7e7db83e910860edd53ef838ac699659cd451/diff:/var/lib/docker/overlay2/9768e3244c157a7a12b8ec31507c1abc0fc806a9f325099262cfeb41e23a5fe1/diff:/var/lib/docker/overlay2/f65dda9d5b79903f5e68f6b9d4a59214d62f304da77faa7e148b90b402d98ca4/diff:/var/lib/docker/overlay2/b663c2dab6b5df7b606daa62e1df5e57f4a2503c2a9a19211f47359fd685dccf/diff:/var/lib/docker/overlay2/bbd620a7e494844db80a2bcd2fd6f170080c5273f1c33a576501214dd4475464/diff:/var/lib/docker/overlay2/23de36667695cec8ea0f4c98fc580213d356d39a3eba5aaab8b6ebeeb2b71596/diff:/var/lib/docker/overlay2/bafb3e8d91e83dcf824b6d5b3f56a67c483c42
723724b009a5aedf578351154f/diff:/var/lib/docker/overlay2/2798e8ce2f51dde257a9cf2dd800492f888fd02515c5f1de4133cc787ee12928/diff:/var/lib/docker/overlay2/e69a88f2dffdd2c7f72c45eaf2e3cd1a8772e0a2af22d6f34d2695bbda62b6e9/diff:/var/lib/docker/overlay2/708816f20892c0e4b94cbe8e80b169fff54ffd870bcde395bd8163dd03b25d0f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2e69a9b27bacdb52367e9be11f9690d0b792694998b22f018de0a9812af62737/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2e69a9b27bacdb52367e9be11f9690d0b792694998b22f018de0a9812af62737/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2e69a9b27bacdb52367e9be11f9690d0b792694998b22f018de0a9812af62737/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "force-systemd-env-20220509085537-6723",
	                "Source": "/var/lib/docker/volumes/force-systemd-env-20220509085537-6723/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-env-20220509085537-6723",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-env-20220509085537-6723",
	                "name.minikube.sigs.k8s.io": "force-systemd-env-20220509085537-6723",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "81ff9db1c4dead6edf835c240df1a91ef8282049955a0524b5f26fbdd3924f56",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49334"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49333"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49330"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49332"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49331"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/81ff9db1c4de",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-env-20220509085537-6723": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "7faf13aee668",
	                        "force-systemd-env-20220509085537-6723"
	                    ],
	                    "NetworkID": "010984d42916aa4062269c1b5f71c7d96c1b1b58a0d12df1d65abb048a73a54d",
	                    "EndpointID": "b37e72efa617abdec318b406922b62a4193d6544c5354f004c7b66e0b15b296b",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
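The post-mortem dump is the complete `docker inspect` document; individual fields can be pulled out with the same Go templates the harness uses for the SSH port earlier in this log. Two sketches against the container above:

    # Container state only
    docker inspect -f '{{.State.Status}}' force-systemd-env-20220509085537-6723
    # Host port bound to the guest SSH port (22/tcp), here 49334
    docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
        force-systemd-env-20220509085537-6723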
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p force-systemd-env-20220509085537-6723 -n force-systemd-env-20220509085537-6723
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p force-systemd-env-20220509085537-6723 -n force-systemd-env-20220509085537-6723: exit status 6 (448.611476ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0509 08:56:48.565252  215869 status.go:413] kubeconfig endpoint: extract IP: "force-systemd-env-20220509085537-6723" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "force-systemd-env-20220509085537-6723" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
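The exit-6 status reflects a stale kubeconfig rather than a dead host: provisioning aborted before the profile's endpoint was written, so the entry the status check looks for never appears in the kubeconfig. When the cluster itself is healthy, the warning's own suggestion applies (moot here, since the profile is deleted just below):

    # Repoint the kubectl context at the current endpoint for this profile
    minikube update-context -p force-systemd-env-20220509085537-6723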
helpers_test.go:175: Cleaning up "force-systemd-env-20220509085537-6723" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-20220509085537-6723
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-20220509085537-6723: (2.255150768s)
--- FAIL: TestForceSystemdEnv (73.19s)

                                                
                                    
x
+
TestKubernetesUpgrade (71.94s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220509085441-6723 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220509085441-6723 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (42.462160384s)
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-20220509085441-6723

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-20220509085441-6723: (11.256988315s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-20220509085441-6723 status --format={{.Host}}
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-20220509085441-6723 status --format={{.Host}}: exit status 7 (132.152986ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
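The flow under test is start-old, stop, start-new against a single profile; condensed to the commands the harness runs (flags as logged):

    out/minikube-linux-amd64 start -p kubernetes-upgrade-20220509085441-6723 \
        --memory=2200 --kubernetes-version=v1.16.0 --driver=docker --container-runtime=docker
    out/minikube-linux-amd64 stop -p kubernetes-upgrade-20220509085441-6723
    out/minikube-linux-amd64 start -p kubernetes-upgrade-20220509085441-6723 \
        --memory=2200 --kubernetes-version=v1.24.1-rc.0 --driver=docker --container-runtime=docker

The second start is where this test fails, with exit status 80.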
version_upgrade_test.go:250: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220509085441-6723 --memory=2200 --kubernetes-version=v1.24.1-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:250: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220509085441-6723 --memory=2200 --kubernetes-version=v1.24.1-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: exit status 80 (15.974032823s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-20220509085441-6723] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14070
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on existing profile
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* Starting control plane node kubernetes-upgrade-20220509085441-6723 in cluster kubernetes-upgrade-20220509085441-6723
	* Pulling base image ...
	* Restarting existing docker container for "kubernetes-upgrade-20220509085441-6723" ...
	* Restarting existing docker container for "kubernetes-upgrade-20220509085441-6723" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0509 08:55:35.711295  195809 out.go:296] Setting OutFile to fd 1 ...
	I0509 08:55:35.711434  195809 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0509 08:55:35.711446  195809 out.go:309] Setting ErrFile to fd 2...
	I0509 08:55:35.711455  195809 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0509 08:55:35.711620  195809 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/bin
	I0509 08:55:35.711945  195809 out.go:303] Setting JSON to false
	I0509 08:55:35.714222  195809 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":2290,"bootTime":1652084246,"procs":1138,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1024-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0509 08:55:35.714317  195809 start.go:125] virtualization: kvm guest
	I0509 08:55:35.717775  195809 out.go:177] * [kubernetes-upgrade-20220509085441-6723] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0509 08:55:35.719610  195809 out.go:177]   - MINIKUBE_LOCATION=14070
	I0509 08:55:35.719552  195809 notify.go:193] Checking for updates...
	I0509 08:55:35.722772  195809 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0509 08:55:35.724381  195809 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/kubeconfig
	I0509 08:55:35.725979  195809 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube
	I0509 08:55:35.727691  195809 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0509 08:55:35.729731  195809 config.go:178] Loaded profile config "kubernetes-upgrade-20220509085441-6723": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0509 08:55:35.730391  195809 driver.go:346] Setting default libvirt URI to qemu:///system
	I0509 08:55:35.779271  195809 docker.go:137] docker version: linux-20.10.15
	I0509 08:55:35.779389  195809 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0509 08:55:35.901041  195809 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:60 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:61 SystemTime:2022-05-09 08:55:35.81254138 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1024-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0509 08:55:35.901203  195809 docker.go:254] overlay module found
	I0509 08:55:35.904120  195809 out.go:177] * Using the docker driver based on existing profile
	I0509 08:55:35.905546  195809 start.go:284] selected driver: docker
	I0509 08:55:35.905569  195809 start.go:801] validating driver "docker" against &{Name:kubernetes-upgrade-20220509085441-6723 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220509085441-6723 Namespace:de
fault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Di
sableMetrics:false}
	I0509 08:55:35.905701  195809 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0509 08:55:35.905745  195809 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0509 08:55:35.905770  195809 out.go:239] ! Your cgroup does not allow setting memory.
	I0509 08:55:35.907167  195809 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0509 08:55:35.909287  195809 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0509 08:55:36.032732  195809 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:60 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:59 SystemTime:2022-05-09 08:55:35.943765594 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1024-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clie
ntInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	W0509 08:55:36.032932  195809 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0509 08:55:36.032958  195809 out.go:239] ! Your cgroup does not allow setting memory.
	I0509 08:55:36.035313  195809 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0509 08:55:36.036974  195809 cni.go:95] Creating CNI manager for ""
	I0509 08:55:36.037006  195809 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0509 08:55:36.037019  195809 start_flags.go:306] config:
	{Name:kubernetes-upgrade-20220509085441-6723 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1-rc.0 ClusterName:kubernetes-upgrade-20220509085441-6723 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0509 08:55:36.038914  195809 out.go:177] * Starting control plane node kubernetes-upgrade-20220509085441-6723 in cluster kubernetes-upgrade-20220509085441-6723
	I0509 08:55:36.040286  195809 cache.go:120] Beginning downloading kic base image for docker with docker
	I0509 08:55:36.041998  195809 out.go:177] * Pulling base image ...
	I0509 08:55:36.043365  195809 preload.go:132] Checking if preload exists for k8s version v1.24.1-rc.0 and runtime docker
	I0509 08:55:36.043488  195809 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0509 08:55:36.087201  195809 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0509 08:55:36.087230  195809 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	W0509 08:55:36.137322  195809 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.1-rc.0/preloaded-images-k8s-v18-v1.24.1-rc.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I0509 08:55:36.137474  195809 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/kubernetes-upgrade-20220509085441-6723/config.json ...
	I0509 08:55:36.137532  195809 cache.go:107] acquiring lock: {Name:mk1569f6206cc0716265219e09c430414472b598 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0509 08:55:36.137584  195809 cache.go:107] acquiring lock: {Name:mk75fa60ea4c6bd12e5117018b2138616318dcf6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0509 08:55:36.137582  195809 cache.go:107] acquiring lock: {Name:mk02b239292b1330548320cb0856f2d5bd377dd7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0509 08:55:36.137628  195809 cache.go:107] acquiring lock: {Name:mk69f40cd6bee92acb0de362041daeb54b4baad2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0509 08:55:36.137550  195809 cache.go:107] acquiring lock: {Name:mk2079e3af58e09016e3aa56ce438b7135b6a1e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0509 08:55:36.137662  195809 cache.go:107] acquiring lock: {Name:mk0034a75f0a91d03aa12d52ea9bd59e9bf92418 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0509 08:55:36.137662  195809 cache.go:107] acquiring lock: {Name:mk3fd38b6eabac2624d636789aa7e8b99c525e7d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0509 08:55:36.137758  195809 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0509 08:55:36.137781  195809 cache.go:206] Successfully downloaded all kic artifacts
	I0509 08:55:36.137781  195809 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 258.521µs
	I0509 08:55:36.137794  195809 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0509 08:55:36.137795  195809 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 exists
	I0509 08:55:36.137551  195809 cache.go:107] acquiring lock: {Name:mk7b0f925d4840d486112d2d0a793ba574df1329 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0509 08:55:36.137810  195809 start.go:352] acquiring machines lock for kubernetes-upgrade-20220509085441-6723: {Name:mk4b33cce4d6cf1758fd22377d4a179c0c038c8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0509 08:55:36.137819  195809 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.1-rc.0 exists
	I0509 08:55:36.137826  195809 cache.go:96] cache image "k8s.gcr.io/coredns/coredns:v1.8.6" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6" took 165.244µs
	I0509 08:55:36.137844  195809 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0 exists
	I0509 08:55:36.137851  195809 cache.go:80] save to tar file k8s.gcr.io/coredns/coredns:v1.8.6 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 succeeded
	I0509 08:55:36.137834  195809 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.24.1-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.1-rc.0" took 251.87µs
	I0509 08:55:36.137865  195809 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.24.1-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.1-rc.0 succeeded
	I0509 08:55:36.137859  195809 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7 exists
	I0509 08:55:36.137873  195809 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.1-rc.0 exists
	I0509 08:55:36.137879  195809 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.1-rc.0 exists
	I0509 08:55:36.137868  195809 cache.go:96] cache image "k8s.gcr.io/etcd:3.5.3-0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0" took 327.253µs
	I0509 08:55:36.137891  195809 cache.go:80] save to tar file k8s.gcr.io/etcd:3.5.3-0 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0 succeeded
	I0509 08:55:36.137880  195809 start.go:356] acquired machines lock for "kubernetes-upgrade-20220509085441-6723" in 58.016µs
	I0509 08:55:36.137891  195809 cache.go:96] cache image "k8s.gcr.io/pause:3.7" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7" took 310.854µs
	I0509 08:55:36.137903  195809 cache.go:80] save to tar file k8s.gcr.io/pause:3.7 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7 succeeded
	I0509 08:55:36.137907  195809 start.go:94] Skipping create...Using existing machine configuration
	I0509 08:55:36.137898  195809 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.24.1-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.1-rc.0" took 371.35µs
	I0509 08:55:36.137897  195809 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.24.1-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.1-rc.0" took 238.149µs
	I0509 08:55:36.137915  195809 fix.go:55] fixHost starting: 
	I0509 08:55:36.137918  195809 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.24.1-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.1-rc.0 succeeded
	I0509 08:55:36.137919  195809 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.24.1-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.1-rc.0 succeeded
	I0509 08:55:36.137877  195809 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.1-rc.0 exists
	I0509 08:55:36.138019  195809 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.24.1-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.1-rc.0" took 427.663µs
	I0509 08:55:36.138034  195809 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.24.1-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.1-rc.0 succeeded
	I0509 08:55:36.138049  195809 cache.go:87] Successfully saved all images to host disk.
	I0509 08:55:36.138177  195809 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220509085441-6723 --format={{.State.Status}}
	I0509 08:55:36.175148  195809 fix.go:103] recreateIfNeeded on kubernetes-upgrade-20220509085441-6723: state=Stopped err=<nil>
	W0509 08:55:36.175202  195809 fix.go:129] unexpected machine state, will restart: <nil>
	I0509 08:55:36.179233  195809 out.go:177] * Restarting existing docker container for "kubernetes-upgrade-20220509085441-6723" ...
	I0509 08:55:36.180919  195809 cli_runner.go:164] Run: docker start kubernetes-upgrade-20220509085441-6723
	W0509 08:55:36.371515  195809 cli_runner.go:211] docker start kubernetes-upgrade-20220509085441-6723 returned with exit code 1
	I0509 08:55:36.371624  195809 cli_runner.go:164] Run: docker inspect kubernetes-upgrade-20220509085441-6723
	I0509 08:55:36.405999  195809 errors.go:84] Postmortem inspect ("docker inspect kubernetes-upgrade-20220509085441-6723"): -- stdout --
	[
	    {
	        "Id": "9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55",
	        "Created": "2022-05-09T08:54:48.053286897Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 130,
	            "Error": "network ab3dc0e987463e0d1919f0f30610dba9f36f6956dd59c0979fddc98d5b653e0f not found",
	            "StartedAt": "2022-05-09T08:54:48.472807369Z",
	            "FinishedAt": "2022-05-09T08:55:34.933426699Z"
	        },
	        "Image": "sha256:44d43b69f3d5ba7f801dca891b535f23f9839671e82277938ec7dc42a22c50d6",
	        "ResolvConfPath": "/var/lib/docker/containers/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55/hostname",
	        "HostsPath": "/var/lib/docker/containers/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55/hosts",
	        "LogPath": "/var/lib/docker/containers/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55-json.log",
	        "Name": "/kubernetes-upgrade-20220509085441-6723",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "kubernetes-upgrade-20220509085441-6723:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-20220509085441-6723",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4c80c52a664c1dfd1bf7928b1d2ccd53938fa6eb9de02e184ba37e8eb872a866-init/diff:/var/lib/docker/overlay2/beaaca4c58fe6ff4bdb88567c3d78ab7a23955eafaa5637df03ee2e0482d2aa6/diff:/var/lib/docker/overlay2/7c16b810bbfa3a2abff75078fa37b4cba0b2f101ff43d49beaabc3fd2602b1c9/diff:/var/lib/docker/overlay2/60f04c0e4baa8ad1c02ae5e34e6f505db0d740e2d7dc0833b2ff3b8037c1a9b6/diff:/var/lib/docker/overlay2/a12543300ae4ff803b2f0493a60a04a921312ec5a7b6ed493e66acadf998daef/diff:/var/lib/docker/overlay2/2d68f658a64cd8b7255bce93547db5d1b20b119ef6da8b9ce06614134661f235/diff:/var/lib/docker/overlay2/0f968f210c1565f6e8c4e444c650502c06a120d8767c9fefb7b6b2a09f4af83b/diff:/var/lib/docker/overlay2/987a67893acdccd514357a50db6c11680ae1899ec07a36085367a241ba1f0545/diff:/var/lib/docker/overlay2/d5446d1adc007f4390aa6173ef257b0bf52a9c9cc6533f16dd6c577fde5334b3/diff:/var/lib/docker/overlay2/265dd0cde77d578f6412ad48596c75c3548b5e077059c0fa835e4f22775fab83/diff:/var/lib/docker/overlay2/e6b98b
08dd64e06639e2a321169f84a89e8d21cea02d673819156eeb7a4747c3/diff:/var/lib/docker/overlay2/acf7117613a840e97b7a1101bfdce3a154a335ee0ffcaf74bd7f1b27b00cdaab/diff:/var/lib/docker/overlay2/9e88fa64aa800dac6e01eddc9cfd525a0dcd906c1adf98de066be87f87a6b52c/diff:/var/lib/docker/overlay2/6c539edead197cd449ca81fd34e3c534a6ca25446a52141d0ae4484aaac05482/diff:/var/lib/docker/overlay2/866719c52f41a692b94f42071d314de55b290912f946f4ec5f3785a7a5ae40df/diff:/var/lib/docker/overlay2/1c488a9b3bad141652873f4085ce1a466e1f1e8ccbd086d03a492b45179a6064/diff:/var/lib/docker/overlay2/bce5b942da09326e5a5c8595077544ced08031b1cd9b22ff8bff0a4458540139/diff:/var/lib/docker/overlay2/ffe34c3764a4eeac791f4672d47389ec3f399f716ab6e5531d7c5e587f3ace00/diff:/var/lib/docker/overlay2/a377e0779d03467b09a26898ae35b12f325d62984304817f49c693b2146c08f4/diff:/var/lib/docker/overlay2/6092380f07b29488cf0a30cb486638d86eaa8e00ff356a7985c6ac6f2fae5c1d/diff:/var/lib/docker/overlay2/1bc014cc0cf0a91c61f131c06b0194709f853dec23defe9233cbd9cc40030c28/diff:/var/lib/d
ocker/overlay2/0c81c4db7384c48318a800165af7f811348a8081efdd9dbd912e05e55c9eb4e0/diff:/var/lib/docker/overlay2/72b0c515d90bc71e27b766a9be89e315777a5bbe643d8fc508a9ae12557a58ce/diff:/var/lib/docker/overlay2/a4d193bf8c377d4cf1357b9261d3c54d995f17bda3db5abee3ef5caec001d75e/diff:/var/lib/docker/overlay2/763c6d291074b0842ecdeec1f3842fd6a0af0cb86839c82bc38cec0f40d095ca/diff:/var/lib/docker/overlay2/e4156eb2a94ac7136eb674fb8c22ea7f6dff50cd81e4857119d81112dc5ad99d/diff:/var/lib/docker/overlay2/be7effa3bb906b8c48aefab3cc72e657931bccd42c35b03bb52c679c37c70d25/diff:/var/lib/docker/overlay2/ccf8c1a68774cc6129c43490c675e6ba0ff0c88284ad899c9efb4b7492e92a06/diff:/var/lib/docker/overlay2/55f74cddb5c8f2da1131ddd67f7f2a20b8c2be8719b71047d5185fdb4722627d/diff:/var/lib/docker/overlay2/f6c38e83545e9de87d03b1e8e9b0239079acf47784afc11291b2172a11f8296d/diff:/var/lib/docker/overlay2/24604ce83180fcb6b9e1adbaf37db6776e4949ca5e1f4f9050b8fb8dc87d7591/diff:/var/lib/docker/overlay2/b03b1952bea32e53ce88b0d92d09f78ca76ceca0a146b088628be224813
ae87c/diff:/var/lib/docker/overlay2/d7ee02a5ba355c246c3107f286c4130457358aa56e0d8feb3850d953812fe76d/diff:/var/lib/docker/overlay2/1223ae0e5105ae89c78524490f0ccc5fe6dfa373e175fa70474b06c62aad91c7/diff:/var/lib/docker/overlay2/c3209d71eed94ed66b11ba3a7573c347d5c09a5a4fedb00418e52571abb2b5f9/diff:/var/lib/docker/overlay2/d6d32632e36023dec15e643fd41f77168442327cecdead87a47e70683cb0c660/diff:/var/lib/docker/overlay2/fd413b21f1f34027c98a1b4106f7e7db83e910860edd53ef838ac699659cd451/diff:/var/lib/docker/overlay2/9768e3244c157a7a12b8ec31507c1abc0fc806a9f325099262cfeb41e23a5fe1/diff:/var/lib/docker/overlay2/f65dda9d5b79903f5e68f6b9d4a59214d62f304da77faa7e148b90b402d98ca4/diff:/var/lib/docker/overlay2/b663c2dab6b5df7b606daa62e1df5e57f4a2503c2a9a19211f47359fd685dccf/diff:/var/lib/docker/overlay2/bbd620a7e494844db80a2bcd2fd6f170080c5273f1c33a576501214dd4475464/diff:/var/lib/docker/overlay2/23de36667695cec8ea0f4c98fc580213d356d39a3eba5aaab8b6ebeeb2b71596/diff:/var/lib/docker/overlay2/bafb3e8d91e83dcf824b6d5b3f56a67c483c42
723724b009a5aedf578351154f/diff:/var/lib/docker/overlay2/2798e8ce2f51dde257a9cf2dd800492f888fd02515c5f1de4133cc787ee12928/diff:/var/lib/docker/overlay2/e69a88f2dffdd2c7f72c45eaf2e3cd1a8772e0a2af22d6f34d2695bbda62b6e9/diff:/var/lib/docker/overlay2/708816f20892c0e4b94cbe8e80b169fff54ffd870bcde395bd8163dd03b25d0f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4c80c52a664c1dfd1bf7928b1d2ccd53938fa6eb9de02e184ba37e8eb872a866/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4c80c52a664c1dfd1bf7928b1d2ccd53938fa6eb9de02e184ba37e8eb872a866/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4c80c52a664c1dfd1bf7928b1d2ccd53938fa6eb9de02e184ba37e8eb872a866/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-20220509085441-6723",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-20220509085441-6723/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-20220509085441-6723",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-20220509085441-6723",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-20220509085441-6723",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4d9bdd6efabd7d19b0c4b1317d302d4bdff4657c9fa9a1027917f2a96a3ba380",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {},
	            "SandboxKey": "/var/run/docker/netns/4d9bdd6efabd",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-20220509085441-6723": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "9eeef2af4bd8",
	                        "kubernetes-upgrade-20220509085441-6723"
	                    ],
	                    "NetworkID": "ab3dc0e987463e0d1919f0f30610dba9f36f6956dd59c0979fddc98d5b653e0f",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
	
	-- /stdout --
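The inspect output above contains the root cause of the failed restart: the container's endpoint still references NetworkID ab3dc0e987463e0d1919f0f30610dba9f36f6956dd59c0979fddc98d5b653e0f for the "kubernetes-upgrade-20220509085441-6723" network, and the State block records the matching "network ... not found" error with exit code 130, so "docker start" cannot reattach the container. A minimal recovery sketch, assuming the named network was deleted while the container was stopped; the 192.168.58.0/24 subnet is an inference from the IPAMConfig address 192.168.58.2 and the 192.168.58.1 gateway visible later in this report, not something the log states directly:

	# Confirm the named network is gone, then recreate it and retry the start.
	docker network ls --filter name=kubernetes-upgrade-20220509085441-6723
	docker network create --subnet 192.168.58.0/24 --gateway 192.168.58.1 \
	    kubernetes-upgrade-20220509085441-6723
	docker start kubernetes-upgrade-20220509085441-6723

Recreating the network by name may be enough for the stopped container to start again; if the container still resolves the old NetworkID rather than the name, the container itself has to be recreated.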
	I0509 08:55:36.406088  195809 cli_runner.go:164] Run: docker logs --timestamps --details kubernetes-upgrade-20220509085441-6723
	I0509 08:55:36.447221  195809 errors.go:91] Postmortem logs ("docker logs --timestamps --details kubernetes-upgrade-20220509085441-6723"): -- stdout --
	2022-05-09T08:54:48.471260248Z  + userns=
	2022-05-09T08:54:48.471300634Z  + grep -Eqv '0[[:space:]]+0[[:space:]]+4294967295' /proc/self/uid_map
	2022-05-09T08:54:48.474122330Z  + validate_userns
	2022-05-09T08:54:48.474168246Z  + [[ -z '' ]]
	2022-05-09T08:54:48.474173803Z  + return
	2022-05-09T08:54:48.474177777Z  + configure_containerd
	2022-05-09T08:54:48.474507815Z  ++ stat -f -c %T /kind
	2022-05-09T08:54:48.475796885Z  + [[ overlayfs == \z\f\s ]]
	2022-05-09T08:54:48.475826600Z  + configure_proxy
	2022-05-09T08:54:48.475831768Z  + mkdir -p /etc/systemd/system.conf.d/
	2022-05-09T08:54:48.477517626Z  + [[ ! -z '' ]]
	2022-05-09T08:54:48.478304723Z  + cat
	2022-05-09T08:54:48.479273664Z  + fix_kmsg
	2022-05-09T08:54:48.479288630Z  + [[ ! -e /dev/kmsg ]]
	2022-05-09T08:54:48.479292377Z  + fix_mount
	2022-05-09T08:54:48.479295903Z  + echo 'INFO: ensuring we can execute mount/umount even with userns-remap'
	2022-05-09T08:54:48.479299826Z  INFO: ensuring we can execute mount/umount even with userns-remap
	2022-05-09T08:54:48.479708646Z  ++ which mount
	2022-05-09T08:54:48.481144595Z  ++ which umount
	2022-05-09T08:54:48.481900596Z  + chown root:root /usr/bin/mount /usr/bin/umount
	2022-05-09T08:54:48.488289063Z  ++ which mount
	2022-05-09T08:54:48.490367491Z  ++ which umount
	2022-05-09T08:54:48.490389164Z  + chmod -s /usr/bin/mount /usr/bin/umount
	2022-05-09T08:54:48.492038039Z  +++ which mount
	2022-05-09T08:54:48.493067021Z  ++ stat -f -c %T /usr/bin/mount
	2022-05-09T08:54:48.497479312Z  + [[ overlayfs == \a\u\f\s ]]
	2022-05-09T08:54:48.497502256Z  + [[ -z '' ]]
	2022-05-09T08:54:48.497506840Z  + echo 'INFO: remounting /sys read-only'
	2022-05-09T08:54:48.497510298Z  INFO: remounting /sys read-only
	2022-05-09T08:54:48.497513782Z  + mount -o remount,ro /sys
	2022-05-09T08:54:48.499860533Z  + echo 'INFO: making mounts shared'
	2022-05-09T08:54:48.499970184Z  INFO: making mounts shared
	2022-05-09T08:54:48.500098327Z  + mount --make-rshared /
	2022-05-09T08:54:48.502999281Z  + retryable_fix_cgroup
	2022-05-09T08:54:48.503018220Z  ++ seq 0 10
	2022-05-09T08:54:48.503314614Z  + for i in $(seq 0 10)
	2022-05-09T08:54:48.503325876Z  + fix_cgroup
	2022-05-09T08:54:48.503418417Z  + [[ -f /sys/fs/cgroup/cgroup.controllers ]]
	2022-05-09T08:54:48.503435857Z  + echo 'INFO: detected cgroup v1'
	2022-05-09T08:54:48.503440024Z  INFO: detected cgroup v1
	2022-05-09T08:54:48.505852156Z  + echo 'INFO: fix cgroup mounts for all subsystems'
	2022-05-09T08:54:48.505874933Z  INFO: fix cgroup mounts for all subsystems
	2022-05-09T08:54:48.505893332Z  + local current_cgroup
	2022-05-09T08:54:48.505897149Z  ++ cut -d: -f3
	2022-05-09T08:54:48.505900717Z  ++ grep -E '^[^:]*:([^:]*,)?cpu(,[^,:]*)?:.*' /proc/self/cgroup
	2022-05-09T08:54:48.506496662Z  + current_cgroup=/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.506514273Z  + local cgroup_subsystems
	2022-05-09T08:54:48.507099718Z  ++ findmnt -lun -o source,target -t cgroup
	2022-05-09T08:54:48.507378391Z  ++ grep /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.507587122Z  ++ awk '{print $2}'
	2022-05-09T08:54:48.510297098Z  + cgroup_subsystems='/sys/fs/cgroup/systemd
	2022-05-09T08:54:48.510313375Z  /sys/fs/cgroup/net_cls,net_prio
	2022-05-09T08:54:48.510317236Z  /sys/fs/cgroup/pids
	2022-05-09T08:54:48.510320633Z  /sys/fs/cgroup/blkio
	2022-05-09T08:54:48.510324192Z  /sys/fs/cgroup/cpu,cpuacct
	2022-05-09T08:54:48.510327641Z  /sys/fs/cgroup/freezer
	2022-05-09T08:54:48.510330968Z  /sys/fs/cgroup/rdma
	2022-05-09T08:54:48.510334467Z  /sys/fs/cgroup/memory
	2022-05-09T08:54:48.510338489Z  /sys/fs/cgroup/devices
	2022-05-09T08:54:48.510340858Z  /sys/fs/cgroup/cpuset
	2022-05-09T08:54:48.510342979Z  /sys/fs/cgroup/hugetlb
	2022-05-09T08:54:48.510345057Z  /sys/fs/cgroup/perf_event'
	2022-05-09T08:54:48.510347956Z  + local cgroup_mounts
	2022-05-09T08:54:48.512398069Z  ++ grep -E -o '/[[:alnum:]].* /sys/fs/cgroup.*.*cgroup' /proc/self/mountinfo
	2022-05-09T08:54:48.512413387Z  + cgroup_mounts='/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime shared:383 master:9 - cgroup cgroup
	2022-05-09T08:54:48.512418877Z  /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/net_cls,net_prio rw,nosuid,nodev,noexec,relatime shared:388 master:16 - cgroup cgroup
	2022-05-09T08:54:48.512423102Z  /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/pids rw,nosuid,nodev,noexec,relatime shared:389 master:17 - cgroup cgroup
	2022-05-09T08:54:48.512427285Z  /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime shared:403 master:18 - cgroup cgroup
	2022-05-09T08:54:48.512431215Z  /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/cpu,cpuacct rw,nosuid,nodev,noexec,relatime shared:404 master:19 - cgroup cgroup
	2022-05-09T08:54:48.512438201Z  /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime shared:405 master:20 - cgroup cgroup
	2022-05-09T08:54:48.512442056Z  /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/rdma rw,nosuid,nodev,noexec,relatime shared:406 master:21 - cgroup cgroup
	2022-05-09T08:54:48.512458710Z  /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime shared:407 master:22 - cgroup cgroup
	2022-05-09T08:54:48.512462814Z  /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime shared:408 master:23 - cgroup cgroup
	2022-05-09T08:54:48.512466523Z  /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime shared:409 master:24 - cgroup cgroup
	2022-05-09T08:54:48.512470016Z  /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/hugetlb rw,nosuid,nodev,noexec,relatime shared:424 master:25 - cgroup cgroup
	2022-05-09T08:54:48.512475266Z  /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime shared:425 master:26 - cgroup cgroup'
	2022-05-09T08:54:48.512478724Z  + [[ -n /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime shared:383 master:9 - cgroup cgroup
	2022-05-09T08:54:48.512482419Z  /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/net_cls,net_prio rw,nosuid,nodev,noexec,relatime shared:388 master:16 - cgroup cgroup
	2022-05-09T08:54:48.512486367Z  /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/pids rw,nosuid,nodev,noexec,relatime shared:389 master:17 - cgroup cgroup
	2022-05-09T08:54:48.512490051Z  /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime shared:403 master:18 - cgroup cgroup
	2022-05-09T08:54:48.512493777Z  /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/cpu,cpuacct rw,nosuid,nodev,noexec,relatime shared:404 master:19 - cgroup cgroup
	2022-05-09T08:54:48.512497218Z  /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime shared:405 master:20 - cgroup cgroup
	2022-05-09T08:54:48.512500751Z  /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/rdma rw,nosuid,nodev,noexec,relatime shared:406 master:21 - cgroup cgroup
	2022-05-09T08:54:48.512504301Z  /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime shared:407 master:22 - cgroup cgroup
	2022-05-09T08:54:48.512507809Z  /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime shared:408 master:23 - cgroup cgroup
	2022-05-09T08:54:48.512511212Z  /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime shared:409 master:24 - cgroup cgroup
	2022-05-09T08:54:48.512515161Z  /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/hugetlb rw,nosuid,nodev,noexec,relatime shared:424 master:25 - cgroup cgroup
	2022-05-09T08:54:48.512517993Z  /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime shared:425 master:26 - cgroup cgroup ]]
	2022-05-09T08:54:48.512520323Z  + local mount_root
	2022-05-09T08:54:48.513098271Z  ++ head -n 1
	2022-05-09T08:54:48.513247350Z  ++ cut '-d ' -f1
	2022-05-09T08:54:48.514855693Z  + mount_root=/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.515628508Z  ++ echo '/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime shared:383 master:9 - cgroup cgroup
	2022-05-09T08:54:48.515643884Z  /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/net_cls,net_prio rw,nosuid,nodev,noexec,relatime shared:388 master:16 - cgroup cgroup
	2022-05-09T08:54:48.515648480Z  /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/pids rw,nosuid,nodev,noexec,relatime shared:389 master:17 - cgroup cgroup
	2022-05-09T08:54:48.515652395Z  /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime shared:403 master:18 - cgroup cgroup
	2022-05-09T08:54:48.515656142Z  /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/cpu,cpuacct rw,nosuid,nodev,noexec,relatime shared:404 master:19 - cgroup cgroup
	2022-05-09T08:54:48.515659817Z  /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime shared:405 master:20 - cgroup cgroup
	2022-05-09T08:54:48.515663264Z  /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/rdma rw,nosuid,nodev,noexec,relatime shared:406 master:21 - cgroup cgroup
	2022-05-09T08:54:48.515667045Z  /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime shared:407 master:22 - cgroup cgroup
	2022-05-09T08:54:48.515670224Z  /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime shared:408 master:23 - cgroup cgroup
	2022-05-09T08:54:48.515674419Z  /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime shared:409 master:24 - cgroup cgroup
	2022-05-09T08:54:48.515678065Z  /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/hugetlb rw,nosuid,nodev,noexec,relatime shared:424 master:25 - cgroup cgroup
	2022-05-09T08:54:48.515681567Z  /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime shared:425 master:26 - cgroup cgroup'
	2022-05-09T08:54:48.515685160Z  ++ cut '-d ' -f 2
	2022-05-09T08:54:48.516591908Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-05-09T08:54:48.516645425Z  + local target=/sys/fs/cgroup/systemd/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.516651252Z  + findmnt /sys/fs/cgroup/systemd/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.519106998Z  + mkdir -p /sys/fs/cgroup/systemd/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.520529803Z  + mount --bind /sys/fs/cgroup/systemd /sys/fs/cgroup/systemd/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.522379054Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-05-09T08:54:48.522395958Z  + local target=/sys/fs/cgroup/net_cls,net_prio/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.522400516Z  + findmnt /sys/fs/cgroup/net_cls,net_prio/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.524330806Z  + mkdir -p /sys/fs/cgroup/net_cls,net_prio/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.525477673Z  + mount --bind /sys/fs/cgroup/net_cls,net_prio /sys/fs/cgroup/net_cls,net_prio/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.526982499Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-05-09T08:54:48.526999060Z  + local target=/sys/fs/cgroup/pids/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.527003545Z  + findmnt /sys/fs/cgroup/pids/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.528845140Z  + mkdir -p /sys/fs/cgroup/pids/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.529882718Z  + mount --bind /sys/fs/cgroup/pids /sys/fs/cgroup/pids/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.531050445Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-05-09T08:54:48.531070607Z  + local target=/sys/fs/cgroup/blkio/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.531127184Z  + findmnt /sys/fs/cgroup/blkio/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.533028373Z  + mkdir -p /sys/fs/cgroup/blkio/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.534462055Z  + mount --bind /sys/fs/cgroup/blkio /sys/fs/cgroup/blkio/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.536092612Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-05-09T08:54:48.536110411Z  + local target=/sys/fs/cgroup/cpu,cpuacct/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.536115415Z  + findmnt /sys/fs/cgroup/cpu,cpuacct/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.538673739Z  + mkdir -p /sys/fs/cgroup/cpu,cpuacct/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.539915877Z  + mount --bind /sys/fs/cgroup/cpu,cpuacct /sys/fs/cgroup/cpu,cpuacct/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.541437087Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-05-09T08:54:48.541455190Z  + local target=/sys/fs/cgroup/freezer/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.541460132Z  + findmnt /sys/fs/cgroup/freezer/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.543391313Z  + mkdir -p /sys/fs/cgroup/freezer/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.545772098Z  + mount --bind /sys/fs/cgroup/freezer /sys/fs/cgroup/freezer/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.547248575Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-05-09T08:54:48.547267260Z  + local target=/sys/fs/cgroup/rdma/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.547272562Z  + findmnt /sys/fs/cgroup/rdma/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.549630691Z  + mkdir -p /sys/fs/cgroup/rdma/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.550641817Z  + mount --bind /sys/fs/cgroup/rdma /sys/fs/cgroup/rdma/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.551961871Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-05-09T08:54:48.551974887Z  + local target=/sys/fs/cgroup/memory/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.551979418Z  + findmnt /sys/fs/cgroup/memory/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.553965667Z  + mkdir -p /sys/fs/cgroup/memory/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.555098664Z  + mount --bind /sys/fs/cgroup/memory /sys/fs/cgroup/memory/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.556707022Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-05-09T08:54:48.556726328Z  + local target=/sys/fs/cgroup/devices/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.556730861Z  + findmnt /sys/fs/cgroup/devices/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.558608335Z  + mkdir -p /sys/fs/cgroup/devices/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.559691381Z  + mount --bind /sys/fs/cgroup/devices /sys/fs/cgroup/devices/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.561100946Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-05-09T08:54:48.561116387Z  + local target=/sys/fs/cgroup/cpuset/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.561120903Z  + findmnt /sys/fs/cgroup/cpuset/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.562920076Z  + mkdir -p /sys/fs/cgroup/cpuset/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.597307309Z  + mount --bind /sys/fs/cgroup/cpuset /sys/fs/cgroup/cpuset/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.598832008Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-05-09T08:54:48.598864762Z  + local target=/sys/fs/cgroup/hugetlb/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.598869311Z  + findmnt /sys/fs/cgroup/hugetlb/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.600653125Z  + mkdir -p /sys/fs/cgroup/hugetlb/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.601936277Z  + mount --bind /sys/fs/cgroup/hugetlb /sys/fs/cgroup/hugetlb/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.603327936Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-05-09T08:54:48.603345656Z  + local target=/sys/fs/cgroup/perf_event/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.603356206Z  + findmnt /sys/fs/cgroup/perf_event/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.605703733Z  + mkdir -p /sys/fs/cgroup/perf_event/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.606930655Z  + mount --bind /sys/fs/cgroup/perf_event /sys/fs/cgroup/perf_event/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.608352931Z  + mount --make-rprivate /sys/fs/cgroup
	2022-05-09T08:54:48.610192609Z  + echo '/sys/fs/cgroup/systemd
	2022-05-09T08:54:48.610206857Z  /sys/fs/cgroup/net_cls,net_prio
	2022-05-09T08:54:48.610220939Z  /sys/fs/cgroup/pids
	2022-05-09T08:54:48.610224594Z  /sys/fs/cgroup/blkio
	2022-05-09T08:54:48.610228030Z  /sys/fs/cgroup/cpu,cpuacct
	2022-05-09T08:54:48.610231475Z  /sys/fs/cgroup/freezer
	2022-05-09T08:54:48.610234947Z  /sys/fs/cgroup/rdma
	2022-05-09T08:54:48.610238274Z  /sys/fs/cgroup/memory
	2022-05-09T08:54:48.610252110Z  /sys/fs/cgroup/devices
	2022-05-09T08:54:48.610255784Z  /sys/fs/cgroup/cpuset
	2022-05-09T08:54:48.610259138Z  /sys/fs/cgroup/hugetlb
	2022-05-09T08:54:48.610262284Z  /sys/fs/cgroup/perf_event'
	2022-05-09T08:54:48.610272595Z  + IFS=
	2022-05-09T08:54:48.610276540Z  + read -r subsystem
	2022-05-09T08:54:48.610371307Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/systemd
	2022-05-09T08:54:48.610393767Z  + local cgroup_root=/kubelet
	2022-05-09T08:54:48.610447245Z  + local subsystem=/sys/fs/cgroup/systemd
	2022-05-09T08:54:48.610464114Z  + '[' -z /kubelet ']'
	2022-05-09T08:54:48.610513753Z  + mkdir -p /sys/fs/cgroup/systemd//kubelet
	2022-05-09T08:54:48.611508052Z  + '[' /sys/fs/cgroup/systemd == /sys/fs/cgroup/cpuset ']'
	2022-05-09T08:54:48.611534686Z  + mount --bind /sys/fs/cgroup/systemd//kubelet /sys/fs/cgroup/systemd//kubelet
	2022-05-09T08:54:48.612962535Z  + IFS=
	2022-05-09T08:54:48.612979492Z  + read -r subsystem
	2022-05-09T08:54:48.612985176Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/net_cls,net_prio
	2022-05-09T08:54:48.613001130Z  + local cgroup_root=/kubelet
	2022-05-09T08:54:48.613006620Z  + local subsystem=/sys/fs/cgroup/net_cls,net_prio
	2022-05-09T08:54:48.613245064Z  + '[' -z /kubelet ']'
	2022-05-09T08:54:48.613257642Z  + mkdir -p /sys/fs/cgroup/net_cls,net_prio//kubelet
	2022-05-09T08:54:48.614415224Z  + '[' /sys/fs/cgroup/net_cls,net_prio == /sys/fs/cgroup/cpuset ']'
	2022-05-09T08:54:48.614430406Z  + mount --bind /sys/fs/cgroup/net_cls,net_prio//kubelet /sys/fs/cgroup/net_cls,net_prio//kubelet
	2022-05-09T08:54:48.616087283Z  + IFS=
	2022-05-09T08:54:48.616101283Z  + read -r subsystem
	2022-05-09T08:54:48.616105704Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/pids
	2022-05-09T08:54:48.616108868Z  + local cgroup_root=/kubelet
	2022-05-09T08:54:48.616141544Z  + local subsystem=/sys/fs/cgroup/pids
	2022-05-09T08:54:48.616155294Z  + '[' -z /kubelet ']'
	2022-05-09T08:54:48.616159239Z  + mkdir -p /sys/fs/cgroup/pids//kubelet
	2022-05-09T08:54:48.617284882Z  + '[' /sys/fs/cgroup/pids == /sys/fs/cgroup/cpuset ']'
	2022-05-09T08:54:48.617303449Z  + mount --bind /sys/fs/cgroup/pids//kubelet /sys/fs/cgroup/pids//kubelet
	2022-05-09T08:54:48.618668985Z  + IFS=
	2022-05-09T08:54:48.618683956Z  + read -r subsystem
	2022-05-09T08:54:48.618688310Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/blkio
	2022-05-09T08:54:48.618691995Z  + local cgroup_root=/kubelet
	2022-05-09T08:54:48.618728923Z  + local subsystem=/sys/fs/cgroup/blkio
	2022-05-09T08:54:48.618750290Z  + '[' -z /kubelet ']'
	2022-05-09T08:54:48.618753462Z  + mkdir -p /sys/fs/cgroup/blkio//kubelet
	2022-05-09T08:54:48.619763460Z  + '[' /sys/fs/cgroup/blkio == /sys/fs/cgroup/cpuset ']'
	2022-05-09T08:54:48.619770759Z  + mount --bind /sys/fs/cgroup/blkio//kubelet /sys/fs/cgroup/blkio//kubelet
	2022-05-09T08:54:48.621114013Z  + IFS=
	2022-05-09T08:54:48.621127858Z  + read -r subsystem
	2022-05-09T08:54:48.621131952Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/cpu,cpuacct
	2022-05-09T08:54:48.621135451Z  + local cgroup_root=/kubelet
	2022-05-09T08:54:48.621139093Z  + local subsystem=/sys/fs/cgroup/cpu,cpuacct
	2022-05-09T08:54:48.621191001Z  + '[' -z /kubelet ']'
	2022-05-09T08:54:48.621204858Z  + mkdir -p /sys/fs/cgroup/cpu,cpuacct//kubelet
	2022-05-09T08:54:48.622435221Z  + '[' /sys/fs/cgroup/cpu,cpuacct == /sys/fs/cgroup/cpuset ']'
	2022-05-09T08:54:48.622450554Z  + mount --bind /sys/fs/cgroup/cpu,cpuacct//kubelet /sys/fs/cgroup/cpu,cpuacct//kubelet
	2022-05-09T08:54:48.623904570Z  + IFS=
	2022-05-09T08:54:48.623919652Z  + read -r subsystem
	2022-05-09T08:54:48.623924396Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/freezer
	2022-05-09T08:54:48.623954773Z  + local cgroup_root=/kubelet
	2022-05-09T08:54:48.623959048Z  + local subsystem=/sys/fs/cgroup/freezer
	2022-05-09T08:54:48.623962278Z  + '[' -z /kubelet ']'
	2022-05-09T08:54:48.623965670Z  + mkdir -p /sys/fs/cgroup/freezer//kubelet
	2022-05-09T08:54:48.625029116Z  + '[' /sys/fs/cgroup/freezer == /sys/fs/cgroup/cpuset ']'
	2022-05-09T08:54:48.625046744Z  + mount --bind /sys/fs/cgroup/freezer//kubelet /sys/fs/cgroup/freezer//kubelet
	2022-05-09T08:54:48.626317951Z  + IFS=
	2022-05-09T08:54:48.626332395Z  + read -r subsystem
	2022-05-09T08:54:48.626336767Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/rdma
	2022-05-09T08:54:48.626340462Z  + local cgroup_root=/kubelet
	2022-05-09T08:54:48.626343634Z  + local subsystem=/sys/fs/cgroup/rdma
	2022-05-09T08:54:48.626346895Z  + '[' -z /kubelet ']'
	2022-05-09T08:54:48.626350010Z  + mkdir -p /sys/fs/cgroup/rdma//kubelet
	2022-05-09T08:54:48.627496152Z  + '[' /sys/fs/cgroup/rdma == /sys/fs/cgroup/cpuset ']'
	2022-05-09T08:54:48.627511352Z  + mount --bind /sys/fs/cgroup/rdma//kubelet /sys/fs/cgroup/rdma//kubelet
	2022-05-09T08:54:48.628631281Z  + IFS=
	2022-05-09T08:54:48.628647043Z  + read -r subsystem
	2022-05-09T08:54:48.628650828Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/memory
	2022-05-09T08:54:48.628653940Z  + local cgroup_root=/kubelet
	2022-05-09T08:54:48.628693466Z  + local subsystem=/sys/fs/cgroup/memory
	2022-05-09T08:54:48.628698453Z  + '[' -z /kubelet ']'
	2022-05-09T08:54:48.628701999Z  + mkdir -p /sys/fs/cgroup/memory//kubelet
	2022-05-09T08:54:48.632741012Z  + '[' /sys/fs/cgroup/memory == /sys/fs/cgroup/cpuset ']'
	2022-05-09T08:54:48.632758647Z  + mount --bind /sys/fs/cgroup/memory//kubelet /sys/fs/cgroup/memory//kubelet
	2022-05-09T08:54:48.632761680Z  + IFS=
	2022-05-09T08:54:48.632763932Z  + read -r subsystem
	2022-05-09T08:54:48.632766097Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/devices
	2022-05-09T08:54:48.632768604Z  + local cgroup_root=/kubelet
	2022-05-09T08:54:48.632770748Z  + local subsystem=/sys/fs/cgroup/devices
	2022-05-09T08:54:48.632772777Z  + '[' -z /kubelet ']'
	2022-05-09T08:54:48.632774857Z  + mkdir -p /sys/fs/cgroup/devices//kubelet
	2022-05-09T08:54:48.632776920Z  + '[' /sys/fs/cgroup/devices == /sys/fs/cgroup/cpuset ']'
	2022-05-09T08:54:48.632779220Z  + mount --bind /sys/fs/cgroup/devices//kubelet /sys/fs/cgroup/devices//kubelet
	2022-05-09T08:54:48.633518582Z  + IFS=
	2022-05-09T08:54:48.633533342Z  + read -r subsystem
	2022-05-09T08:54:48.633537608Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/cpuset
	2022-05-09T08:54:48.633554800Z  + local cgroup_root=/kubelet
	2022-05-09T08:54:48.633557325Z  + local subsystem=/sys/fs/cgroup/cpuset
	2022-05-09T08:54:48.633559354Z  + '[' -z /kubelet ']'
	2022-05-09T08:54:48.633561358Z  + mkdir -p /sys/fs/cgroup/cpuset//kubelet
	2022-05-09T08:54:48.634676883Z  + '[' /sys/fs/cgroup/cpuset == /sys/fs/cgroup/cpuset ']'
	2022-05-09T08:54:48.634691047Z  + cat /sys/fs/cgroup/cpuset/cpuset.cpus
	2022-05-09T08:54:48.635753652Z  + cat /sys/fs/cgroup/cpuset/cpuset.mems
	2022-05-09T08:54:48.636819371Z  + mount --bind /sys/fs/cgroup/cpuset//kubelet /sys/fs/cgroup/cpuset//kubelet
	2022-05-09T08:54:48.638369533Z  + IFS=
	2022-05-09T08:54:48.638382618Z  + read -r subsystem
	2022-05-09T08:54:48.638386744Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/hugetlb
	2022-05-09T08:54:48.638480846Z  + local cgroup_root=/kubelet
	2022-05-09T08:54:48.638497058Z  + local subsystem=/sys/fs/cgroup/hugetlb
	2022-05-09T08:54:48.638501381Z  + '[' -z /kubelet ']'
	2022-05-09T08:54:48.638504676Z  + mkdir -p /sys/fs/cgroup/hugetlb//kubelet
	2022-05-09T08:54:48.639580000Z  + '[' /sys/fs/cgroup/hugetlb == /sys/fs/cgroup/cpuset ']'
	2022-05-09T08:54:48.639593873Z  + mount --bind /sys/fs/cgroup/hugetlb//kubelet /sys/fs/cgroup/hugetlb//kubelet
	2022-05-09T08:54:48.641233270Z  + IFS=
	2022-05-09T08:54:48.641249737Z  + read -r subsystem
	2022-05-09T08:54:48.641254116Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/perf_event
	2022-05-09T08:54:48.641257739Z  + local cgroup_root=/kubelet
	2022-05-09T08:54:48.641261106Z  + local subsystem=/sys/fs/cgroup/perf_event
	2022-05-09T08:54:48.641299852Z  + '[' -z /kubelet ']'
	2022-05-09T08:54:48.641312761Z  + mkdir -p /sys/fs/cgroup/perf_event//kubelet
	2022-05-09T08:54:48.642292336Z  + '[' /sys/fs/cgroup/perf_event == /sys/fs/cgroup/cpuset ']'
	2022-05-09T08:54:48.642300239Z  + mount --bind /sys/fs/cgroup/perf_event//kubelet /sys/fs/cgroup/perf_event//kubelet
	2022-05-09T08:54:48.643522811Z  + IFS=
	2022-05-09T08:54:48.643537566Z  + read -r subsystem
	2022-05-09T08:54:48.643924011Z  + return
	2022-05-09T08:54:48.643944798Z  + fix_machine_id
	2022-05-09T08:54:48.643948943Z  + echo 'INFO: clearing and regenerating /etc/machine-id'
	2022-05-09T08:54:48.643984496Z  INFO: clearing and regenerating /etc/machine-id
	2022-05-09T08:54:48.643996949Z  + rm -f /etc/machine-id
	2022-05-09T08:54:48.644897924Z  + systemd-machine-id-setup
	2022-05-09T08:54:48.648696420Z  Initializing machine ID from D-Bus machine ID.
	2022-05-09T08:54:48.652007275Z  + fix_product_name
	2022-05-09T08:54:48.652029499Z  + [[ -f /sys/class/dmi/id/product_name ]]
	2022-05-09T08:54:48.652166407Z  + echo 'INFO: faking /sys/class/dmi/id/product_name to be "kind"'
	2022-05-09T08:54:48.652188535Z  INFO: faking /sys/class/dmi/id/product_name to be "kind"
	2022-05-09T08:54:48.652193280Z  + echo kind
	2022-05-09T08:54:48.652317652Z  + mount -o ro,bind /kind/product_name /sys/class/dmi/id/product_name
	2022-05-09T08:54:48.654576894Z  + fix_product_uuid
	2022-05-09T08:54:48.654920962Z  + [[ ! -f /kind/product_uuid ]]
	2022-05-09T08:54:48.654936501Z  + cat /proc/sys/kernel/random/uuid
	2022-05-09T08:54:48.656224502Z  + [[ -f /sys/class/dmi/id/product_uuid ]]
	2022-05-09T08:54:48.656480423Z  + echo 'INFO: faking /sys/class/dmi/id/product_uuid to be random'
	2022-05-09T08:54:48.656811969Z  INFO: faking /sys/class/dmi/id/product_uuid to be random
	2022-05-09T08:54:48.657012896Z  + mount -o ro,bind /kind/product_uuid /sys/class/dmi/id/product_uuid
	2022-05-09T08:54:48.658849907Z  + [[ -f /sys/devices/virtual/dmi/id/product_uuid ]]
	2022-05-09T08:54:48.658922027Z  + echo 'INFO: faking /sys/devices/virtual/dmi/id/product_uuid as well'
	2022-05-09T08:54:48.658928802Z  INFO: faking /sys/devices/virtual/dmi/id/product_uuid as well
	2022-05-09T08:54:48.658933089Z  + mount -o ro,bind /kind/product_uuid /sys/devices/virtual/dmi/id/product_uuid
	2022-05-09T08:54:48.660598758Z  + select_iptables
	2022-05-09T08:54:48.660640772Z  + local mode=nft
	2022-05-09T08:54:48.662524997Z  ++ wc -l
	2022-05-09T08:54:48.668976711Z  ++ grep '^-'
	2022-05-09T08:54:48.670423709Z  + num_legacy_lines=6
	2022-05-09T08:54:48.670444339Z  + '[' 6 -ge 10 ']'
	2022-05-09T08:54:48.671409455Z  ++ grep '^-'
	2022-05-09T08:54:48.671472590Z  ++ wc -l
	2022-05-09T08:54:48.676259391Z  ++ true
	2022-05-09T08:54:48.676528455Z  + num_nft_lines=0
	2022-05-09T08:54:48.676641878Z  + '[' 6 -ge 0 ']'
	2022-05-09T08:54:48.676654457Z  + mode=legacy
	2022-05-09T08:54:48.676659525Z  + echo 'INFO: setting iptables to detected mode: legacy'
	2022-05-09T08:54:48.676663297Z  INFO: setting iptables to detected mode: legacy
	2022-05-09T08:54:48.676668168Z  + update-alternatives --set iptables /usr/sbin/iptables-legacy
	2022-05-09T08:54:48.676711079Z  + echo 'retryable update-alternatives: --set iptables /usr/sbin/iptables-legacy'
	2022-05-09T08:54:48.676716691Z  + local 'args=--set iptables /usr/sbin/iptables-legacy'
	2022-05-09T08:54:48.677129110Z  ++ seq 0 15
	2022-05-09T08:54:48.677887760Z  + for i in $(seq 0 15)
	2022-05-09T08:54:48.677923378Z  + /usr/bin/update-alternatives --set iptables /usr/sbin/iptables-legacy
	2022-05-09T08:54:48.682231290Z  + return
	2022-05-09T08:54:48.682257261Z  + update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
	2022-05-09T08:54:48.682269494Z  + echo 'retryable update-alternatives: --set ip6tables /usr/sbin/ip6tables-legacy'
	2022-05-09T08:54:48.682301012Z  + local 'args=--set ip6tables /usr/sbin/ip6tables-legacy'
	2022-05-09T08:54:48.682763162Z  ++ seq 0 15
	2022-05-09T08:54:48.683704961Z  + for i in $(seq 0 15)
	2022-05-09T08:54:48.683751511Z  + /usr/bin/update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
	2022-05-09T08:54:48.687089186Z  + return
	2022-05-09T08:54:48.687110450Z  + enable_network_magic
	2022-05-09T08:54:48.687155319Z  + local docker_embedded_dns_ip=127.0.0.11
	2022-05-09T08:54:48.687171375Z  + local docker_host_ip
	2022-05-09T08:54:48.688428728Z  ++ cut '-d ' -f1
	2022-05-09T08:54:48.688441794Z  ++ head -n1 /dev/fd/63
	2022-05-09T08:54:48.688541379Z  +++ getent ahostsv4 host.docker.internal
	2022-05-09T08:54:48.704892876Z  + docker_host_ip=
	2022-05-09T08:54:48.704927706Z  + [[ -z '' ]]
	2022-05-09T08:54:48.705672035Z  ++ ip -4 route show default
	2022-05-09T08:54:48.705697608Z  ++ cut '-d ' -f3
	2022-05-09T08:54:48.707997158Z  + docker_host_ip=192.168.58.1
	2022-05-09T08:54:48.708291965Z  + iptables-save
	2022-05-09T08:54:48.708673076Z  + iptables-restore
	2022-05-09T08:54:48.710278828Z  + sed -e 's/-d 127.0.0.11/-d 192.168.58.1/g' -e 's/-A OUTPUT \(.*\) -j DOCKER_OUTPUT/\0\n-A PREROUTING \1 -j DOCKER_OUTPUT/' -e 's/--to-source :53/--to-source 192.168.58.1:53/g'
	2022-05-09T08:54:48.713519355Z  + cp /etc/resolv.conf /etc/resolv.conf.original
	2022-05-09T08:54:48.715214009Z  + sed -e s/127.0.0.11/192.168.58.1/g /etc/resolv.conf.original
	2022-05-09T08:54:48.721381801Z  ++ cut '-d ' -f1
	2022-05-09T08:54:48.721495582Z  ++ head -n1 /dev/fd/63
	2022-05-09T08:54:48.722167585Z  ++++ hostname
	2022-05-09T08:54:48.722925563Z  +++ getent ahostsv4 kubernetes-upgrade-20220509085441-6723
	2022-05-09T08:54:48.725827211Z  + curr_ipv4=192.168.58.2
	2022-05-09T08:54:48.725843524Z  + echo 'INFO: Detected IPv4 address: 192.168.58.2'
	2022-05-09T08:54:48.725847584Z  INFO: Detected IPv4 address: 192.168.58.2
	2022-05-09T08:54:48.725850354Z  + '[' -f /kind/old-ipv4 ']'
	2022-05-09T08:54:48.725963468Z  + [[ -n 192.168.58.2 ]]
	2022-05-09T08:54:48.725978216Z  + echo -n 192.168.58.2
	2022-05-09T08:54:48.727205326Z  ++ cut '-d ' -f1
	2022-05-09T08:54:48.727336070Z  ++ head -n1 /dev/fd/63
	2022-05-09T08:54:48.728589155Z  ++++ hostname
	2022-05-09T08:54:48.729390159Z  +++ getent ahostsv6 kubernetes-upgrade-20220509085441-6723
	2022-05-09T08:54:48.731247255Z  + curr_ipv6=
	2022-05-09T08:54:48.731361196Z  + echo 'INFO: Detected IPv6 address: '
	2022-05-09T08:54:48.731538731Z  INFO: Detected IPv6 address: 
	2022-05-09T08:54:48.731559946Z  + '[' -f /kind/old-ipv6 ']'
	2022-05-09T08:54:48.731563664Z  + [[ -n '' ]]
	2022-05-09T08:54:48.732112224Z  ++ uname -a
	2022-05-09T08:54:48.732929561Z  + echo 'entrypoint completed: Linux kubernetes-upgrade-20220509085441-6723 5.13.0-1024-gcp #29~20.04.1-Ubuntu SMP Thu Apr 14 23:15:00 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux'
	2022-05-09T08:54:48.733021973Z  entrypoint completed: Linux kubernetes-upgrade-20220509085441-6723 5.13.0-1024-gcp #29~20.04.1-Ubuntu SMP Thu Apr 14 23:15:00 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	2022-05-09T08:54:48.733125482Z  + exec /sbin/init
	2022-05-09T08:54:48.741524396Z  systemd 245.4-4ubuntu3.15 running in system mode. (+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid)
	2022-05-09T08:54:48.741551150Z  Detected virtualization docker.
	2022-05-09T08:54:48.741554542Z  Detected architecture x86-64.
	2022-05-09T08:54:48.742168314Z  
	2022-05-09T08:54:48.742187397Z  Welcome to Ubuntu 20.04.4 LTS!
	2022-05-09T08:54:48.742192333Z  
	2022-05-09T08:54:48.742247475Z  Set hostname to <kubernetes-upgrade-20220509085441-6723>.
	2022-05-09T08:54:48.800205113Z  [  OK  ] Started Dispatch Password …ts to Console Directory Watch.
	2022-05-09T08:54:48.800438379Z  [  OK  ] Set up automount Arbitrary…s File System Automount Point.
	2022-05-09T08:54:48.800459193Z  [  OK  ] Reached target Local Encrypted Volumes.
	2022-05-09T08:54:48.800464673Z  [  OK  ] Reached target Network is Online.
	2022-05-09T08:54:48.800583296Z  [  OK  ] Reached target Paths.
	2022-05-09T08:54:48.800589856Z  [  OK  ] Reached target Slices.
	2022-05-09T08:54:48.800594456Z  [  OK  ] Reached target Swap.
	2022-05-09T08:54:48.800966471Z  [  OK  ] Listening on Journal Audit Socket.
	2022-05-09T08:54:48.801111687Z  [  OK  ] Listening on Journal Socket (/dev/log).
	2022-05-09T08:54:48.801292328Z  [  OK  ] Listening on Journal Socket.
	2022-05-09T08:54:48.803726050Z           Mounting Huge Pages File System...
	2022-05-09T08:54:48.805730665Z           Mounting Kernel Debug File System...
	2022-05-09T08:54:48.809219084Z           Mounting Kernel Trace File System...
	2022-05-09T08:54:48.811132246Z           Starting Journal Service...
	2022-05-09T08:54:48.851486030Z           Starting Create list of st…odes for the current kernel...
	2022-05-09T08:54:48.851541489Z           Mounting FUSE Control File System...
	2022-05-09T08:54:48.853491712Z           Starting Remount Root and Kernel File Systems...
	2022-05-09T08:54:48.863171459Z           Starting Apply Kernel Variables...
	2022-05-09T08:54:48.864697038Z  [  OK  ] Started Journal Service.
	2022-05-09T08:54:48.865160056Z  [  OK  ] Mounted Huge Pages File System.
	2022-05-09T08:54:48.865336584Z  [  OK  ] Mounted Kernel Debug File System.
	2022-05-09T08:54:48.865446089Z  [  OK  ] Mounted Kernel Trace File System.
	2022-05-09T08:54:48.866423358Z  [  OK  ] Finished Create list of st… nodes for the current kernel.
	2022-05-09T08:54:48.868044100Z  [  OK  ] Mounted FUSE Control File System.
	2022-05-09T08:54:48.868058259Z  [  OK  ] Finished Remount Root and Kernel File Systems.
	2022-05-09T08:54:48.869309096Z           Starting Flush Journal to Persistent Storage...
	2022-05-09T08:54:48.870576065Z           Starting Create System Users...
	2022-05-09T08:54:48.872015600Z           Starting Update UTMP about System Boot/Shutdown...
	2022-05-09T08:54:48.874644249Z  [  OK  ] Finished Apply Kernel Variables.
	2022-05-09T08:54:48.876972947Z  [  OK  ] Finished Flush Journal to Persistent Storage.
	2022-05-09T08:54:48.884175971Z  [  OK  ] Finished Update UTMP about System Boot/Shutdown.
	2022-05-09T08:54:48.894060711Z  [  OK  ] Finished Create System Users.
	2022-05-09T08:54:48.895689953Z           Starting Create Static Device Nodes in /dev...
	2022-05-09T08:54:48.902614909Z  [  OK  ] Finished Create Static Device Nodes in /dev.
	2022-05-09T08:54:48.902714982Z  [  OK  ] Reached target Local File Systems (Pre).
	2022-05-09T08:54:48.902867181Z  [  OK  ] Reached target Local File Systems.
	2022-05-09T08:54:48.903012035Z  [  OK  ] Reached target System Initialization.
	2022-05-09T08:54:48.903032663Z  [  OK  ] Started Daily Cleanup of Temporary Directories.
	2022-05-09T08:54:48.903077708Z  [  OK  ] Reached target Timers.
	2022-05-09T08:54:48.903254013Z  [  OK  ] Listening on BuildKit.
	2022-05-09T08:54:48.903375631Z  [  OK  ] Listening on D-Bus System Message Bus Socket.
	2022-05-09T08:54:48.904730907Z           Starting Docker Socket for the API.
	2022-05-09T08:54:48.907043709Z           Starting Podman API Socket.
	2022-05-09T08:54:48.907454928Z  [  OK  ] Listening on Docker Socket for the API.
	2022-05-09T08:54:48.908446763Z  [  OK  ] Listening on Podman API Socket.
	2022-05-09T08:54:48.908461725Z  [  OK  ] Reached target Sockets.
	2022-05-09T08:54:48.908467114Z  [  OK  ] Reached target Basic System.
	2022-05-09T08:54:48.909647516Z           Starting containerd container runtime...
	2022-05-09T08:54:48.911079878Z  [  OK  ] Started D-Bus System Message Bus.
	2022-05-09T08:54:48.914052209Z           Starting minikube automount...
	2022-05-09T08:54:48.915380293Z           Starting OpenBSD Secure Shell server...
	2022-05-09T08:54:48.942091320Z  [  OK  ] Finished minikube automount.
	2022-05-09T08:54:48.980803941Z  [  OK  ] Started OpenBSD Secure Shell server.
	2022-05-09T08:54:49.025097606Z  [  OK  ] Started containerd container runtime.
	2022-05-09T08:54:49.028943848Z           Starting Docker Application Container Engine...
	2022-05-09T08:54:49.388449759Z  [  OK  ] Started Docker Application Container Engine.
	2022-05-09T08:54:49.388489289Z  [  OK  ] Reached target Multi-User System.
	2022-05-09T08:54:49.388495868Z  [  OK  ] Reached target Graphical Interface.
	2022-05-09T08:54:49.390075201Z           Starting Update UTMP about System Runlevel Changes...
	2022-05-09T08:54:49.399372972Z  [  OK  ] Finished Update UTMP about System Runlevel Changes.
	2022-05-09T08:55:24.515684789Z  [  OK  ] Stopped target Graphical Interface.
	2022-05-09T08:55:24.515867703Z  [  OK  ] Stopped target Multi-User System.
	2022-05-09T08:55:24.516069122Z  [  OK  ] Stopped target Timers.
	2022-05-09T08:55:24.516232339Z  [  OK  ] Stopped Daily Cleanup of Temporary Directories.
	2022-05-09T08:55:24.519316635Z           Stopping D-Bus System Message Bus...
	2022-05-09T08:55:24.519582766Z           Stopping Docker Application Container Engine...
	2022-05-09T08:55:24.519843067Z           Stopping kubelet: The Kubernetes Node Agent...
	2022-05-09T08:55:24.520056189Z           Stopping OpenBSD Secure Shell server...
	2022-05-09T08:55:24.521788264Z  [  OK  ] Stopped D-Bus System Message Bus.
	2022-05-09T08:55:24.523074666Z  [  OK  ] Stopped OpenBSD Secure Shell server.
	2022-05-09T08:55:24.566159260Z  [  OK  ] Stopped kubelet: The Kubernetes Node Agent.
	2022-05-09T08:55:24.869124769Z  [  OK  ] Unmounted /var/lib/docker/…ee005ef76b2749893a3738/merged.
	2022-05-09T08:55:24.893760964Z  [  OK  ] Unmounted /var/lib/docker/…a0eb1c8e5d2467b89a7e07/merged.
	2022-05-09T08:55:24.910495067Z  [  OK  ] Unmounted /var/lib/docker/…538a4f4e67e2c7821c1139/merged.
	2022-05-09T08:55:24.915653406Z  [  OK  ] Unmounted /var/lib/docker/…050dba3b2923467dbe065d/merged.
	2022-05-09T08:55:24.919648050Z  [  OK  ] Unmounted /var/lib/docker/…ffae74027356e06d4f50f9/merged.
	2022-05-09T08:55:24.920955023Z  [  OK  ] Unmounted /var/lib/docker/…6b32c73da71531b6a3/mounts/shm.
	2022-05-09T08:55:24.921002151Z  [  OK  ] Unmounted /var/lib/docker/…c64a6ca6534a56da9795ba/merged.
	2022-05-09T08:55:24.928125021Z  [  OK  ] Unmounted /var/lib/docker/…8835f4b20ae9f34d9a/mounts/shm.
	2022-05-09T08:55:24.928151735Z  [  OK  ] Unmounted /var/lib/docker/…39f73d9ad064762af314d9/merged.
	2022-05-09T08:55:24.965615143Z  [  OK  ] Unmounted /var/lib/docker/…46f3b6963233d8fc86/mounts/shm.
	2022-05-09T08:55:24.966643475Z  [  OK  ] Unmounted /var/lib/docker/…63c55b66636b13c700df78/merged.
	2022-05-09T08:55:24.971831772Z  [  OK  ] Unmounted /var/lib/docker/…9a93d5503632eea0bd/mounts/shm.
	2022-05-09T08:55:24.973331139Z  [  OK  ] Unmounted /var/lib/docker/…fb16684640e9f42e2d6fa8/merged.
	2022-05-09T08:55:24.981349077Z  [  OK  ] Unmounted /var/lib/docker/…1995469e919e04d7a8/mounts/shm.
	2022-05-09T08:55:24.981464910Z  [  OK  ] Unmounted /var/lib/docker/…484bcb070061b068af436c/merged.
	2022-05-09T08:55:24.981636453Z  [  OK  ] Unmounted /var/lib/docker/…ecd58c34f93cbafa82/mounts/shm.
	2022-05-09T08:55:24.984855002Z  [  OK  ] Unmounted /var/lib/docker/…9550b71290b8c9796e0e98/merged.
	2022-05-09T08:55:27.946224275Z  [*     ] A stop job is running for Docker Ap…n Container Engine (1s / 1min 28s)
	2022-05-09T08:55:28.446217891Z  [**    ] A stop job is running for Docker Ap…n Container Engine (2s / 1min 28s)
	2022-05-09T08:55:28.946206312Z  [***   ] A stop job is running for Docker Ap…n Container Engine (2s / 1min 28s)
	2022-05-09T08:55:29.446197101Z  [ ***  ] A stop job is running for Docker Ap…n Container Engine (3s / 1min 28s)
	2022-05-09T08:55:29.946174641Z  [  *** ] A stop job is running for Docker Ap…n Container Engine (3s / 1min 28s)
	2022-05-09T08:55:30.446202226Z  [   ***] A stop job is running for Docker Ap…n Container Engine (4s / 1min 28s)
	2022-05-09T08:55:30.946343866Z  [    **] A stop job is running for Docker Ap…n Container Engine (4s / 1min 28s)
	2022-05-09T08:55:31.446240569Z  [     *] A stop job is running for Docker Ap…n Container Engine (5s / 1min 28s)
	2022-05-09T08:55:31.946210252Z  [    **] A stop job is running for Docker Ap…n Container Engine (5s / 1min 28s)
	2022-05-09T08:55:32.446244845Z  [   ***] A stop job is running for Docker Ap…n Container Engine (6s / 1min 28s)
	2022-05-09T08:55:32.946266274Z  [  *** ] A stop job is running for Docker Ap…n Container Engine (6s / 1min 28s)
	2022-05-09T08:55:33.446269782Z  [ ***  ] A stop job is running for Docker Ap…n Container Engine (7s / 1min 28s)
	2022-05-09T08:55:33.946217519Z  [***   ] A stop job is running for Docker Ap…n Container Engine (7s / 1min 28s)
	2022-05-09T08:55:34.446221269Z  [**    ] A stop job is running for Docker Ap…n Container Engine (8s / 1min 28s)
	2022-05-09T08:55:34.741819053Z  [  OK  ] Unmounted /var/lib/docker/…4d90fcc19c9dd2185e8206/merged.
	2022-05-09T08:55:34.762903828Z  [  OK  ] Stopped Docker Application Container Engine.
	2022-05-09T08:55:34.763006369Z  [  OK  ] Stopped target Network is Online.
	2022-05-09T08:55:34.763073811Z           Stopping containerd container runtime...
	2022-05-09T08:55:34.763801528Z  [  OK  ] Stopped minikube automount.
	2022-05-09T08:55:34.767383402Z  [  OK  ] Stopped containerd container runtime.
	2022-05-09T08:55:34.767535024Z  [  OK  ] Stopped target Basic System.
	2022-05-09T08:55:34.767551301Z  [  OK  ] Stopped target Paths.
	2022-05-09T08:55:34.767613735Z  [  OK  ] Stopped target Slices.
	2022-05-09T08:55:34.767641754Z  [  OK  ] Stopped target Sockets.
	2022-05-09T08:55:34.768262704Z  [  OK  ] Closed BuildKit.
	2022-05-09T08:55:34.768843514Z  [  OK  ] Closed D-Bus System Message Bus Socket.
	2022-05-09T08:55:34.769371617Z  [  OK  ] Closed Docker Socket for the API.
	2022-05-09T08:55:34.769908808Z  [  OK  ] Closed Podman API Socket.
	2022-05-09T08:55:34.769915903Z  [  OK  ] Stopped target System Initialization.
	2022-05-09T08:55:34.769948889Z  [  OK  ] Stopped target Local Encrypted Volumes.
	2022-05-09T08:55:34.785642278Z  [  OK  ] Stopped Dispatch Password …ts to Console Directory Watch.
	2022-05-09T08:55:34.785987158Z  [  OK  ] Stopped target Local File Systems.
	2022-05-09T08:55:34.787006032Z           Unmounting /data...
	2022-05-09T08:55:34.790417016Z           Unmounting /etc/hostname...
	2022-05-09T08:55:34.790441045Z           Unmounting /etc/hosts...
	2022-05-09T08:55:34.790445391Z           Unmounting /etc/resolv.conf...
	2022-05-09T08:55:34.790640208Z           Unmounting /kind/product_uuid...
	2022-05-09T08:55:34.792699462Z           Unmounting /run/docker/netns/default...
	2022-05-09T08:55:34.794912239Z           Unmounting /tmp/hostpath-provisioner...
	2022-05-09T08:55:34.797678077Z           Unmounting /tmp/hostpath_pv...
	2022-05-09T08:55:34.798619944Z           Unmounting /usr/lib/modules...
	2022-05-09T08:55:34.800275815Z           Unmounting /var/lib/kubele…cret/kube-proxy-token-cgwjt...
	2022-05-09T08:55:34.802011240Z           Unmounting /var/lib/kubele…~secret/coredns-token-6fhnf...
	2022-05-09T08:55:34.803417213Z           Unmounting /var/lib/kubele…age-provisioner-token-p8vbs...
	2022-05-09T08:55:34.804104710Z  [  OK  ] Stopped Apply Kernel Variables.
	2022-05-09T08:55:34.804963727Z           Stopping Update UTMP about System Boot/Shutdown...
	2022-05-09T08:55:34.808097022Z  [  OK  ] Unmounted /data.
	2022-05-09T08:55:34.808887823Z  [  OK  ] Unmounted /etc/hostname.
	2022-05-09T08:55:34.809561466Z  [  OK  ] Unmounted /etc/hosts.
	2022-05-09T08:55:34.810423733Z  [  OK  ] Unmounted /etc/resolv.conf.
	2022-05-09T08:55:34.811154862Z  [  OK  ] Unmounted /kind/product_uuid.
	2022-05-09T08:55:34.811897053Z  [  OK  ] Unmounted /run/docker/netns/default.
	2022-05-09T08:55:34.812589174Z  [  OK  ] Unmounted /tmp/hostpath-provisioner.
	2022-05-09T08:55:34.813609878Z  [  OK  ] Unmounted /tmp/hostpath_pv.
	2022-05-09T08:55:34.814478125Z  [  OK  ] Unmounted /usr/lib/modules.
	2022-05-09T08:55:34.815328236Z  [  OK  ] Unmounted /var/lib/kubelet…secret/kube-proxy-token-cgwjt.
	2022-05-09T08:55:34.816365714Z  [  OK  ] Unmounted /var/lib/kubelet…io~secret/coredns-token-6fhnf.
	2022-05-09T08:55:34.817679811Z  [  OK  ] Unmounted /var/lib/kubelet…orage-provisioner-token-p8vbs.
	2022-05-09T08:55:34.820457428Z           Unmounting /tmp...
	2022-05-09T08:55:34.824128102Z  [  OK  ] Stopped Update UTMP about System Boot/Shutdown.
	2022-05-09T08:55:34.825867452Z  [  OK  ] Unmounted /tmp.
	2022-05-09T08:55:34.826089373Z  [  OK  ] Stopped target Swap.
	2022-05-09T08:55:34.826902988Z           Unmounting /var...
	2022-05-09T08:55:34.831108502Z  [  OK  ] Unmounted /var.
	2022-05-09T08:55:34.831294727Z  [  OK  ] Stopped target Local File Systems (Pre).
	2022-05-09T08:55:34.831375743Z  [  OK  ] Reached target Unmount All Filesystems.
	2022-05-09T08:55:34.832107670Z  [  OK  ] Stopped Create Static Device Nodes in /dev.
	2022-05-09T08:55:34.832891870Z  [  OK  ] Stopped Create System Users.
	2022-05-09T08:55:34.833667869Z  [  OK  ] Stopped Remount Root and Kernel File Systems.
	2022-05-09T08:55:34.833743159Z  [  OK  ] Reached target Shutdown.
	2022-05-09T08:55:34.833751217Z  [  OK  ] Reached target Final Step.
	2022-05-09T08:55:34.838833342Z           Starting Halt...
	2022-05-09T08:55:34.838855197Z  [  OK  ] Finished Power-Off.
	2022-05-09T08:55:34.838861739Z  [  OK  ] Reached target Power-Off.
	
	-- /stdout --
	I0509 08:55:36.447350  195809 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0509 08:55:36.577232  195809 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:60 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:59 SystemTime:2022-05-09 08:55:36.479101126 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1024-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clie
ntInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0509 08:55:36.577340  195809 errors.go:98] postmortem docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:60 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:59 SystemTime:2022-05-09 08:55:36.479101126 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1024-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
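The two docker info dumps above both come from `docker system info --format "{{json .}}"` followed by decoding the JSON. As a minimal, hedged sketch (illustrative only, not minikube's info.go), the same data can be gathered like this; the struct field names match keys visible in the dump:

// Minimal sketch, not minikube's info.go: fetch the machine-readable docker
// info shown above and decode a few of the fields that appear in it.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		fmt.Println("docker info failed:", err)
		return
	}
	// Field names match the keys in the dump (ServerVersion, Driver, ...).
	var info struct {
		ServerVersion     string
		Driver            string
		Containers        int
		ContainersRunning int
	}
	if err := json.Unmarshal(out, &info); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	fmt.Printf("server %s, driver %s, %d containers (%d running)\n",
		info.ServerVersion, info.Driver, info.Containers, info.ContainersRunning)
}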
	I0509 08:55:36.577415  195809 network_create.go:272] running [docker network inspect kubernetes-upgrade-20220509085441-6723] to gather additional debugging logs...
	I0509 08:55:36.577435  195809 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220509085441-6723
	W0509 08:55:36.634921  195809 cli_runner.go:211] docker network inspect kubernetes-upgrade-20220509085441-6723 returned with exit code 1
	I0509 08:55:36.634967  195809 network_create.go:275] error running [docker network inspect kubernetes-upgrade-20220509085441-6723]: docker network inspect kubernetes-upgrade-20220509085441-6723: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubernetes-upgrade-20220509085441-6723
	I0509 08:55:36.635004  195809 network_create.go:277] output of [docker network inspect kubernetes-upgrade-20220509085441-6723]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubernetes-upgrade-20220509085441-6723
	
	** /stderr **
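The stderr block above is the root cause surfacing: `docker network inspect` exits with status 1 when the named network no longer exists, and that exit status is what network_create.go is reporting. A standalone sketch of that existence check, assuming nothing about minikube's actual helpers:

// Hedged sketch (not minikube's code) of the check that produced the
// "No such network" error above: run `docker network inspect` and treat
// a non-zero exit as "network absent".
package main

import (
	"fmt"
	"os/exec"
)

func networkExists(name string) (bool, error) {
	cmd := exec.Command("docker", "network", "inspect", name)
	if err := cmd.Run(); err != nil {
		if _, ok := err.(*exec.ExitError); ok {
			// docker exits 1 with "Error: No such network: <name>" on stderr.
			return false, nil
		}
		return false, err // e.g. the docker binary itself is missing
	}
	return true, nil
}

func main() {
	ok, err := networkExists("kubernetes-upgrade-20220509085441-6723")
	fmt.Println(ok, err)
}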
	I0509 08:55:36.635163  195809 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0509 08:55:36.770942  195809 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:60 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:59 SystemTime:2022-05-09 08:55:36.670602141 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1024-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clie
ntInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0509 08:55:36.771352  195809 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220509085441-6723
	I0509 08:55:36.810854  195809 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/kubernetes-upgrade-20220509085441-6723/config.json ...
	I0509 08:55:36.811131  195809 machine.go:88] provisioning docker machine ...
	I0509 08:55:36.811162  195809 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-20220509085441-6723"
	I0509 08:55:36.811224  195809 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220509085441-6723
	W0509 08:55:36.846821  195809 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220509085441-6723 returned with exit code 1
	I0509 08:55:36.846903  195809 machine.go:91] provisioned docker machine in 35.754634ms
	I0509 08:55:36.846973  195809 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0509 08:55:36.847024  195809 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220509085441-6723
	W0509 08:55:36.884481  195809 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220509085441-6723 returned with exit code 1
	I0509 08:55:36.884633  195809 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0509 08:55:37.161092  195809 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220509085441-6723
	W0509 08:55:37.197720  195809 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220509085441-6723 returned with exit code 1
	I0509 08:55:37.197839  195809 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0509 08:55:37.738568  195809 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220509085441-6723
	W0509 08:55:37.777633  195809 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220509085441-6723 returned with exit code 1
	I0509 08:55:37.777742  195809 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0509 08:55:38.433573  195809 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220509085441-6723
	W0509 08:55:38.473894  195809 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220509085441-6723 returned with exit code 1
	W0509 08:55:38.474014  195809 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0509 08:55:38.474063  195809 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0509 08:55:38.474108  195809 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0509 08:55:38.474160  195809 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220509085441-6723
	W0509 08:55:38.531963  195809 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220509085441-6723 returned with exit code 1
	I0509 08:55:38.532083  195809 retry.go:31] will retry after 231.159374ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0509 08:55:38.764376  195809 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220509085441-6723
	W0509 08:55:38.798422  195809 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220509085441-6723 returned with exit code 1
	I0509 08:55:38.798520  195809 retry.go:31] will retry after 445.058653ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0509 08:55:39.244144  195809 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220509085441-6723
	W0509 08:55:39.277681  195809 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220509085441-6723 returned with exit code 1
	I0509 08:55:39.277802  195809 retry.go:31] will retry after 318.170823ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0509 08:55:39.596203  195809 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220509085441-6723
	W0509 08:55:39.640159  195809 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220509085441-6723 returned with exit code 1
	I0509 08:55:39.640310  195809 retry.go:31] will retry after 553.938121ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0509 08:55:40.195180  195809 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220509085441-6723
	W0509 08:55:40.229165  195809 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220509085441-6723 returned with exit code 1
	W0509 08:55:40.229273  195809 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0509 08:55:40.229300  195809 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0509 08:55:40.229326  195809 fix.go:57] fixHost completed within 4.091404079s
	I0509 08:55:40.229336  195809 start.go:81] releasing machines lock for "kubernetes-upgrade-20220509085441-6723", held for 4.091437904s
	W0509 08:55:40.229365  195809 start.go:576] error starting host: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	W0509 08:55:40.229512  195809 out.go:239] ! StartHost failed, but will try again: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	! StartHost failed, but will try again: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	I0509 08:55:40.229532  195809 start.go:591] Will try again in 5 seconds ...
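The retry.go lines above show the pattern in play: the SSH host-port lookup (`docker container inspect -f` on the "22/tcp" binding) fails while the container is stopped, so the caller retries with growing delays before giving up and restarting the host. A minimal sketch of that bounded-retry shape, under the assumption of a simple growing backoff rather than minikube's actual retry implementation:

// Hedged sketch of the retry pattern visible in the retry.go lines above:
// call fn, and on failure wait an increasing delay before trying again.
package main

import (
	"errors"
	"fmt"
	"time"
)

func retry(attempts int, initial time.Duration, fn func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay += delay / 2 // grow the wait between attempts
	}
	return err
}

func main() {
	err := retry(4, 250*time.Millisecond, func() error {
		// Stands in for the failing host-port lookup seen in the log.
		return errors.New("get ssh host-port: container not running")
	})
	fmt.Println("gave up:", err)
}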
	I0509 08:55:45.232716  195809 start.go:352] acquiring machines lock for kubernetes-upgrade-20220509085441-6723: {Name:mk4b33cce4d6cf1758fd22377d4a179c0c038c8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0509 08:55:45.232898  195809 start.go:356] acquired machines lock for "kubernetes-upgrade-20220509085441-6723" in 128.597µs
	I0509 08:55:45.232930  195809 start.go:94] Skipping create...Using existing machine configuration
	I0509 08:55:45.232936  195809 fix.go:55] fixHost starting: 
	I0509 08:55:45.233180  195809 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220509085441-6723 --format={{.State.Status}}
	I0509 08:55:45.265890  195809 fix.go:103] recreateIfNeeded on kubernetes-upgrade-20220509085441-6723: state=Stopped err=<nil>
	W0509 08:55:45.265931  195809 fix.go:129] unexpected machine state, will restart: <nil>
	I0509 08:55:45.643274  195809 out.go:177] * Restarting existing docker container for "kubernetes-upgrade-20220509085441-6723" ...
	I0509 08:55:45.771788  195809 cli_runner.go:164] Run: docker start kubernetes-upgrade-20220509085441-6723
	W0509 08:55:47.753393  195809 cli_runner.go:211] docker start kubernetes-upgrade-20220509085441-6723 returned with exit code 1
	I0509 08:55:47.753426  195809 cli_runner.go:217] Completed: docker start kubernetes-upgrade-20220509085441-6723: (1.891165098s)
	I0509 08:55:47.753488  195809 cli_runner.go:164] Run: docker inspect kubernetes-upgrade-20220509085441-6723
	I0509 08:55:47.787646  195809 errors.go:84] Postmortem inspect ("docker inspect kubernetes-upgrade-20220509085441-6723"): -- stdout --
	[
	    {
	        "Id": "9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55",
	        "Created": "2022-05-09T08:54:48.053286897Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 130,
	            "Error": "network ab3dc0e987463e0d1919f0f30610dba9f36f6956dd59c0979fddc98d5b653e0f not found",
	            "StartedAt": "2022-05-09T08:54:48.472807369Z",
	            "FinishedAt": "2022-05-09T08:55:34.933426699Z"
	        },
	        "Image": "sha256:44d43b69f3d5ba7f801dca891b535f23f9839671e82277938ec7dc42a22c50d6",
	        "ResolvConfPath": "/var/lib/docker/containers/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55/hostname",
	        "HostsPath": "/var/lib/docker/containers/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55/hosts",
	        "LogPath": "/var/lib/docker/containers/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55-json.log",
	        "Name": "/kubernetes-upgrade-20220509085441-6723",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "kubernetes-upgrade-20220509085441-6723:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-20220509085441-6723",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4c80c52a664c1dfd1bf7928b1d2ccd53938fa6eb9de02e184ba37e8eb872a866-init/diff:/var/lib/docker/overlay2/beaaca4c58fe6ff4bdb88567c3d78ab7a23955eafaa5637df03ee2e0482d2aa6/diff:/var/lib/docker/overlay2/7c16b810bbfa3a2abff75078fa37b4cba0b2f101ff43d49beaabc3fd2602b1c9/diff:/var/lib/docker/overlay2/60f04c0e4baa8ad1c02ae5e34e6f505db0d740e2d7dc0833b2ff3b8037c1a9b6/diff:/var/lib/docker/overlay2/a12543300ae4ff803b2f0493a60a04a921312ec5a7b6ed493e66acadf998daef/diff:/var/lib/docker/overlay2/2d68f658a64cd8b7255bce93547db5d1b20b119ef6da8b9ce06614134661f235/diff:/var/lib/docker/overlay2/0f968f210c1565f6e8c4e444c650502c06a120d8767c9fefb7b6b2a09f4af83b/diff:/var/lib/docker/overlay2/987a67893acdccd514357a50db6c11680ae1899ec07a36085367a241ba1f0545/diff:/var/lib/docker/overlay2/d5446d1adc007f4390aa6173ef257b0bf52a9c9cc6533f16dd6c577fde5334b3/diff:/var/lib/docker/overlay2/265dd0cde77d578f6412ad48596c75c3548b5e077059c0fa835e4f22775fab83/diff:/var/lib/docker/overlay2/e6b98b
08dd64e06639e2a321169f84a89e8d21cea02d673819156eeb7a4747c3/diff:/var/lib/docker/overlay2/acf7117613a840e97b7a1101bfdce3a154a335ee0ffcaf74bd7f1b27b00cdaab/diff:/var/lib/docker/overlay2/9e88fa64aa800dac6e01eddc9cfd525a0dcd906c1adf98de066be87f87a6b52c/diff:/var/lib/docker/overlay2/6c539edead197cd449ca81fd34e3c534a6ca25446a52141d0ae4484aaac05482/diff:/var/lib/docker/overlay2/866719c52f41a692b94f42071d314de55b290912f946f4ec5f3785a7a5ae40df/diff:/var/lib/docker/overlay2/1c488a9b3bad141652873f4085ce1a466e1f1e8ccbd086d03a492b45179a6064/diff:/var/lib/docker/overlay2/bce5b942da09326e5a5c8595077544ced08031b1cd9b22ff8bff0a4458540139/diff:/var/lib/docker/overlay2/ffe34c3764a4eeac791f4672d47389ec3f399f716ab6e5531d7c5e587f3ace00/diff:/var/lib/docker/overlay2/a377e0779d03467b09a26898ae35b12f325d62984304817f49c693b2146c08f4/diff:/var/lib/docker/overlay2/6092380f07b29488cf0a30cb486638d86eaa8e00ff356a7985c6ac6f2fae5c1d/diff:/var/lib/docker/overlay2/1bc014cc0cf0a91c61f131c06b0194709f853dec23defe9233cbd9cc40030c28/diff:/var/lib/d
ocker/overlay2/0c81c4db7384c48318a800165af7f811348a8081efdd9dbd912e05e55c9eb4e0/diff:/var/lib/docker/overlay2/72b0c515d90bc71e27b766a9be89e315777a5bbe643d8fc508a9ae12557a58ce/diff:/var/lib/docker/overlay2/a4d193bf8c377d4cf1357b9261d3c54d995f17bda3db5abee3ef5caec001d75e/diff:/var/lib/docker/overlay2/763c6d291074b0842ecdeec1f3842fd6a0af0cb86839c82bc38cec0f40d095ca/diff:/var/lib/docker/overlay2/e4156eb2a94ac7136eb674fb8c22ea7f6dff50cd81e4857119d81112dc5ad99d/diff:/var/lib/docker/overlay2/be7effa3bb906b8c48aefab3cc72e657931bccd42c35b03bb52c679c37c70d25/diff:/var/lib/docker/overlay2/ccf8c1a68774cc6129c43490c675e6ba0ff0c88284ad899c9efb4b7492e92a06/diff:/var/lib/docker/overlay2/55f74cddb5c8f2da1131ddd67f7f2a20b8c2be8719b71047d5185fdb4722627d/diff:/var/lib/docker/overlay2/f6c38e83545e9de87d03b1e8e9b0239079acf47784afc11291b2172a11f8296d/diff:/var/lib/docker/overlay2/24604ce83180fcb6b9e1adbaf37db6776e4949ca5e1f4f9050b8fb8dc87d7591/diff:/var/lib/docker/overlay2/b03b1952bea32e53ce88b0d92d09f78ca76ceca0a146b088628be224813
ae87c/diff:/var/lib/docker/overlay2/d7ee02a5ba355c246c3107f286c4130457358aa56e0d8feb3850d953812fe76d/diff:/var/lib/docker/overlay2/1223ae0e5105ae89c78524490f0ccc5fe6dfa373e175fa70474b06c62aad91c7/diff:/var/lib/docker/overlay2/c3209d71eed94ed66b11ba3a7573c347d5c09a5a4fedb00418e52571abb2b5f9/diff:/var/lib/docker/overlay2/d6d32632e36023dec15e643fd41f77168442327cecdead87a47e70683cb0c660/diff:/var/lib/docker/overlay2/fd413b21f1f34027c98a1b4106f7e7db83e910860edd53ef838ac699659cd451/diff:/var/lib/docker/overlay2/9768e3244c157a7a12b8ec31507c1abc0fc806a9f325099262cfeb41e23a5fe1/diff:/var/lib/docker/overlay2/f65dda9d5b79903f5e68f6b9d4a59214d62f304da77faa7e148b90b402d98ca4/diff:/var/lib/docker/overlay2/b663c2dab6b5df7b606daa62e1df5e57f4a2503c2a9a19211f47359fd685dccf/diff:/var/lib/docker/overlay2/bbd620a7e494844db80a2bcd2fd6f170080c5273f1c33a576501214dd4475464/diff:/var/lib/docker/overlay2/23de36667695cec8ea0f4c98fc580213d356d39a3eba5aaab8b6ebeeb2b71596/diff:/var/lib/docker/overlay2/bafb3e8d91e83dcf824b6d5b3f56a67c483c42
723724b009a5aedf578351154f/diff:/var/lib/docker/overlay2/2798e8ce2f51dde257a9cf2dd800492f888fd02515c5f1de4133cc787ee12928/diff:/var/lib/docker/overlay2/e69a88f2dffdd2c7f72c45eaf2e3cd1a8772e0a2af22d6f34d2695bbda62b6e9/diff:/var/lib/docker/overlay2/708816f20892c0e4b94cbe8e80b169fff54ffd870bcde395bd8163dd03b25d0f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4c80c52a664c1dfd1bf7928b1d2ccd53938fa6eb9de02e184ba37e8eb872a866/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4c80c52a664c1dfd1bf7928b1d2ccd53938fa6eb9de02e184ba37e8eb872a866/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4c80c52a664c1dfd1bf7928b1d2ccd53938fa6eb9de02e184ba37e8eb872a866/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-20220509085441-6723",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-20220509085441-6723/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-20220509085441-6723",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-20220509085441-6723",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-20220509085441-6723",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4d9bdd6efabd7d19b0c4b1317d302d4bdff4657c9fa9a1027917f2a96a3ba380",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {},
	            "SandboxKey": "/var/run/docker/netns/4d9bdd6efabd",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-20220509085441-6723": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "9eeef2af4bd8",
	                        "kubernetes-upgrade-20220509085441-6723"
	                    ],
	                    "NetworkID": "ab3dc0e987463e0d1919f0f30610dba9f36f6956dd59c0979fddc98d5b653e0f",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
	
	-- /stdout --
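The inspect output pins the failure down: `State.ExitCode` is 130, `State.Error` records that the container's network was not found, and `NetworkSettings.Ports` is empty, which is also why the earlier `--format` lookups of the "22/tcp" host port errored. A sketch that decodes just those diagnostic fields from `docker inspect` (field names taken from the JSON above; not how the test harness itself does it):

// Hedged sketch: decode only the fields of `docker inspect` output that
// explain the failure above - the exit code and the stored network error.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type inspectState struct {
	Status   string
	ExitCode int
	Error    string
}

type inspectEntry struct {
	Name  string
	State inspectState
}

func main() {
	out, err := exec.Command("docker", "inspect",
		"kubernetes-upgrade-20220509085441-6723").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	var entries []inspectEntry // docker inspect always returns a JSON array
	if err := json.Unmarshal(out, &entries); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, e := range entries {
		// For the container above this prints status=exited exit=130 and
		// the "network ... not found" error recorded by the daemon.
		fmt.Printf("%s: status=%s exit=%d err=%q\n",
			e.Name, e.State.Status, e.State.ExitCode, e.State.Error)
	}
}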
	I0509 08:55:47.787728  195809 cli_runner.go:164] Run: docker logs --timestamps --details kubernetes-upgrade-20220509085441-6723
	I0509 08:55:47.825514  195809 errors.go:91] Postmortem logs ("docker logs --timestamps --details kubernetes-upgrade-20220509085441-6723"): -- stdout --
	2022-05-09T08:54:48.471260248Z  + userns=
	2022-05-09T08:54:48.471300634Z  + grep -Eqv '0[[:space:]]+0[[:space:]]+4294967295' /proc/self/uid_map
	2022-05-09T08:54:48.474122330Z  + validate_userns
	2022-05-09T08:54:48.474168246Z  + [[ -z '' ]]
	2022-05-09T08:54:48.474173803Z  + return
	2022-05-09T08:54:48.474177777Z  + configure_containerd
	2022-05-09T08:54:48.474507815Z  ++ stat -f -c %T /kind
	2022-05-09T08:54:48.475796885Z  + [[ overlayfs == \z\f\s ]]
	2022-05-09T08:54:48.475826600Z  + configure_proxy
	2022-05-09T08:54:48.475831768Z  + mkdir -p /etc/systemd/system.conf.d/
	2022-05-09T08:54:48.477517626Z  + [[ ! -z '' ]]
	2022-05-09T08:54:48.478304723Z  + cat
	2022-05-09T08:54:48.479273664Z  + fix_kmsg
	2022-05-09T08:54:48.479288630Z  + [[ ! -e /dev/kmsg ]]
	2022-05-09T08:54:48.479292377Z  + fix_mount
	2022-05-09T08:54:48.479295903Z  + echo 'INFO: ensuring we can execute mount/umount even with userns-remap'
	2022-05-09T08:54:48.479299826Z  INFO: ensuring we can execute mount/umount even with userns-remap
	2022-05-09T08:54:48.479708646Z  ++ which mount
	2022-05-09T08:54:48.481144595Z  ++ which umount
	2022-05-09T08:54:48.481900596Z  + chown root:root /usr/bin/mount /usr/bin/umount
	2022-05-09T08:54:48.488289063Z  ++ which mount
	2022-05-09T08:54:48.490367491Z  ++ which umount
	2022-05-09T08:54:48.490389164Z  + chmod -s /usr/bin/mount /usr/bin/umount
	2022-05-09T08:54:48.492038039Z  +++ which mount
	2022-05-09T08:54:48.493067021Z  ++ stat -f -c %T /usr/bin/mount
	2022-05-09T08:54:48.497479312Z  + [[ overlayfs == \a\u\f\s ]]
	2022-05-09T08:54:48.497502256Z  + [[ -z '' ]]
	2022-05-09T08:54:48.497506840Z  + echo 'INFO: remounting /sys read-only'
	2022-05-09T08:54:48.497510298Z  INFO: remounting /sys read-only
	2022-05-09T08:54:48.497513782Z  + mount -o remount,ro /sys
	2022-05-09T08:54:48.499860533Z  + echo 'INFO: making mounts shared'
	2022-05-09T08:54:48.499970184Z  INFO: making mounts shared
	2022-05-09T08:54:48.500098327Z  + mount --make-rshared /
	2022-05-09T08:54:48.502999281Z  + retryable_fix_cgroup
	2022-05-09T08:54:48.503018220Z  ++ seq 0 10
	2022-05-09T08:54:48.503314614Z  + for i in $(seq 0 10)
	2022-05-09T08:54:48.503325876Z  + fix_cgroup
	2022-05-09T08:54:48.503418417Z  + [[ -f /sys/fs/cgroup/cgroup.controllers ]]
	2022-05-09T08:54:48.503435857Z  + echo 'INFO: detected cgroup v1'
	2022-05-09T08:54:48.503440024Z  INFO: detected cgroup v1
	2022-05-09T08:54:48.505852156Z  + echo 'INFO: fix cgroup mounts for all subsystems'
	2022-05-09T08:54:48.505874933Z  INFO: fix cgroup mounts for all subsystems
	2022-05-09T08:54:48.505893332Z  + local current_cgroup
	2022-05-09T08:54:48.505897149Z  ++ cut -d: -f3
	2022-05-09T08:54:48.505900717Z  ++ grep -E '^[^:]*:([^:]*,)?cpu(,[^,:]*)?:.*' /proc/self/cgroup
	2022-05-09T08:54:48.506496662Z  + current_cgroup=/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.506514273Z  + local cgroup_subsystems
	2022-05-09T08:54:48.507099718Z  ++ findmnt -lun -o source,target -t cgroup
	2022-05-09T08:54:48.507378391Z  ++ grep /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.507587122Z  ++ awk '{print $2}'
	2022-05-09T08:54:48.510297098Z  + cgroup_subsystems='/sys/fs/cgroup/systemd
	2022-05-09T08:54:48.510313375Z  /sys/fs/cgroup/net_cls,net_prio
	2022-05-09T08:54:48.510317236Z  /sys/fs/cgroup/pids
	2022-05-09T08:54:48.510320633Z  /sys/fs/cgroup/blkio
	2022-05-09T08:54:48.510324192Z  /sys/fs/cgroup/cpu,cpuacct
	2022-05-09T08:54:48.510327641Z  /sys/fs/cgroup/freezer
	2022-05-09T08:54:48.510330968Z  /sys/fs/cgroup/rdma
	2022-05-09T08:54:48.510334467Z  /sys/fs/cgroup/memory
	2022-05-09T08:54:48.510338489Z  /sys/fs/cgroup/devices
	2022-05-09T08:54:48.510340858Z  /sys/fs/cgroup/cpuset
	2022-05-09T08:54:48.510342979Z  /sys/fs/cgroup/hugetlb
	2022-05-09T08:54:48.510345057Z  /sys/fs/cgroup/perf_event'
	2022-05-09T08:54:48.510347956Z  + local cgroup_mounts
	2022-05-09T08:54:48.512398069Z  ++ grep -E -o '/[[:alnum:]].* /sys/fs/cgroup.*.*cgroup' /proc/self/mountinfo
	2022-05-09T08:54:48.512413387Z  + cgroup_mounts='/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime shared:383 master:9 - cgroup cgroup
	2022-05-09T08:54:48.512418877Z  /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/net_cls,net_prio rw,nosuid,nodev,noexec,relatime shared:388 master:16 - cgroup cgroup
	2022-05-09T08:54:48.512423102Z  /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/pids rw,nosuid,nodev,noexec,relatime shared:389 master:17 - cgroup cgroup
	2022-05-09T08:54:48.512427285Z  /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime shared:403 master:18 - cgroup cgroup
	2022-05-09T08:54:48.512431215Z  /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/cpu,cpuacct rw,nosuid,nodev,noexec,relatime shared:404 master:19 - cgroup cgroup
	2022-05-09T08:54:48.512438201Z  /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime shared:405 master:20 - cgroup cgroup
	2022-05-09T08:54:48.512442056Z  /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/rdma rw,nosuid,nodev,noexec,relatime shared:406 master:21 - cgroup cgroup
	2022-05-09T08:54:48.512458710Z  /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime shared:407 master:22 - cgroup cgroup
	2022-05-09T08:54:48.512462814Z  /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime shared:408 master:23 - cgroup cgroup
	2022-05-09T08:54:48.512466523Z  /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime shared:409 master:24 - cgroup cgroup
	2022-05-09T08:54:48.512470016Z  /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/hugetlb rw,nosuid,nodev,noexec,relatime shared:424 master:25 - cgroup cgroup
	2022-05-09T08:54:48.512475266Z  /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime shared:425 master:26 - cgroup cgroup'
	2022-05-09T08:54:48.512478724Z  + [[ -n /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime shared:383 master:9 - cgroup cgroup
	2022-05-09T08:54:48.512482419Z  /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/net_cls,net_prio rw,nosuid,nodev,noexec,relatime shared:388 master:16 - cgroup cgroup
	2022-05-09T08:54:48.512486367Z  /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/pids rw,nosuid,nodev,noexec,relatime shared:389 master:17 - cgroup cgroup
	2022-05-09T08:54:48.512490051Z  /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime shared:403 master:18 - cgroup cgroup
	2022-05-09T08:54:48.512493777Z  /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/cpu,cpuacct rw,nosuid,nodev,noexec,relatime shared:404 master:19 - cgroup cgroup
	2022-05-09T08:54:48.512497218Z  /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime shared:405 master:20 - cgroup cgroup
	2022-05-09T08:54:48.512500751Z  /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/rdma rw,nosuid,nodev,noexec,relatime shared:406 master:21 - cgroup cgroup
	2022-05-09T08:54:48.512504301Z  /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime shared:407 master:22 - cgroup cgroup
	2022-05-09T08:54:48.512507809Z  /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime shared:408 master:23 - cgroup cgroup
	2022-05-09T08:54:48.512511212Z  /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime shared:409 master:24 - cgroup cgroup
	2022-05-09T08:54:48.512515161Z  /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/hugetlb rw,nosuid,nodev,noexec,relatime shared:424 master:25 - cgroup cgroup
	2022-05-09T08:54:48.512517993Z  /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime shared:425 master:26 - cgroup cgroup ]]
	2022-05-09T08:54:48.512520323Z  + local mount_root
	2022-05-09T08:54:48.513098271Z  ++ head -n 1
	2022-05-09T08:54:48.513247350Z  ++ cut '-d ' -f1
	2022-05-09T08:54:48.514855693Z  + mount_root=/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.515628508Z  ++ echo '/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime shared:383 master:9 - cgroup cgroup
	2022-05-09T08:54:48.515643884Z  /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/net_cls,net_prio rw,nosuid,nodev,noexec,relatime shared:388 master:16 - cgroup cgroup
	2022-05-09T08:54:48.515648480Z  /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/pids rw,nosuid,nodev,noexec,relatime shared:389 master:17 - cgroup cgroup
	2022-05-09T08:54:48.515652395Z  /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime shared:403 master:18 - cgroup cgroup
	2022-05-09T08:54:48.515656142Z  /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/cpu,cpuacct rw,nosuid,nodev,noexec,relatime shared:404 master:19 - cgroup cgroup
	2022-05-09T08:54:48.515659817Z  /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime shared:405 master:20 - cgroup cgroup
	2022-05-09T08:54:48.515663264Z  /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/rdma rw,nosuid,nodev,noexec,relatime shared:406 master:21 - cgroup cgroup
	2022-05-09T08:54:48.515667045Z  /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime shared:407 master:22 - cgroup cgroup
	2022-05-09T08:54:48.515670224Z  /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime shared:408 master:23 - cgroup cgroup
	2022-05-09T08:54:48.515674419Z  /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime shared:409 master:24 - cgroup cgroup
	2022-05-09T08:54:48.515678065Z  /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/hugetlb rw,nosuid,nodev,noexec,relatime shared:424 master:25 - cgroup cgroup
	2022-05-09T08:54:48.515681567Z  /docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55 /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime shared:425 master:26 - cgroup cgroup'
	2022-05-09T08:54:48.515685160Z  ++ cut '-d ' -f 2
	2022-05-09T08:54:48.516591908Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-05-09T08:54:48.516645425Z  + local target=/sys/fs/cgroup/systemd/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.516651252Z  + findmnt /sys/fs/cgroup/systemd/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.519106998Z  + mkdir -p /sys/fs/cgroup/systemd/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.520529803Z  + mount --bind /sys/fs/cgroup/systemd /sys/fs/cgroup/systemd/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.522379054Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-05-09T08:54:48.522395958Z  + local target=/sys/fs/cgroup/net_cls,net_prio/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.522400516Z  + findmnt /sys/fs/cgroup/net_cls,net_prio/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.524330806Z  + mkdir -p /sys/fs/cgroup/net_cls,net_prio/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.525477673Z  + mount --bind /sys/fs/cgroup/net_cls,net_prio /sys/fs/cgroup/net_cls,net_prio/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.526982499Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-05-09T08:54:48.526999060Z  + local target=/sys/fs/cgroup/pids/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.527003545Z  + findmnt /sys/fs/cgroup/pids/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.528845140Z  + mkdir -p /sys/fs/cgroup/pids/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.529882718Z  + mount --bind /sys/fs/cgroup/pids /sys/fs/cgroup/pids/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.531050445Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-05-09T08:54:48.531070607Z  + local target=/sys/fs/cgroup/blkio/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.531127184Z  + findmnt /sys/fs/cgroup/blkio/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.533028373Z  + mkdir -p /sys/fs/cgroup/blkio/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.534462055Z  + mount --bind /sys/fs/cgroup/blkio /sys/fs/cgroup/blkio/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.536092612Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-05-09T08:54:48.536110411Z  + local target=/sys/fs/cgroup/cpu,cpuacct/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.536115415Z  + findmnt /sys/fs/cgroup/cpu,cpuacct/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.538673739Z  + mkdir -p /sys/fs/cgroup/cpu,cpuacct/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.539915877Z  + mount --bind /sys/fs/cgroup/cpu,cpuacct /sys/fs/cgroup/cpu,cpuacct/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.541437087Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-05-09T08:54:48.541455190Z  + local target=/sys/fs/cgroup/freezer/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.541460132Z  + findmnt /sys/fs/cgroup/freezer/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.543391313Z  + mkdir -p /sys/fs/cgroup/freezer/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.545772098Z  + mount --bind /sys/fs/cgroup/freezer /sys/fs/cgroup/freezer/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.547248575Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-05-09T08:54:48.547267260Z  + local target=/sys/fs/cgroup/rdma/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.547272562Z  + findmnt /sys/fs/cgroup/rdma/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.549630691Z  + mkdir -p /sys/fs/cgroup/rdma/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.550641817Z  + mount --bind /sys/fs/cgroup/rdma /sys/fs/cgroup/rdma/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.551961871Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-05-09T08:54:48.551974887Z  + local target=/sys/fs/cgroup/memory/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.551979418Z  + findmnt /sys/fs/cgroup/memory/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.553965667Z  + mkdir -p /sys/fs/cgroup/memory/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.555098664Z  + mount --bind /sys/fs/cgroup/memory /sys/fs/cgroup/memory/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.556707022Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-05-09T08:54:48.556726328Z  + local target=/sys/fs/cgroup/devices/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.556730861Z  + findmnt /sys/fs/cgroup/devices/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.558608335Z  + mkdir -p /sys/fs/cgroup/devices/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.559691381Z  + mount --bind /sys/fs/cgroup/devices /sys/fs/cgroup/devices/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.561100946Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-05-09T08:54:48.561116387Z  + local target=/sys/fs/cgroup/cpuset/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.561120903Z  + findmnt /sys/fs/cgroup/cpuset/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.562920076Z  + mkdir -p /sys/fs/cgroup/cpuset/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.597307309Z  + mount --bind /sys/fs/cgroup/cpuset /sys/fs/cgroup/cpuset/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.598832008Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-05-09T08:54:48.598864762Z  + local target=/sys/fs/cgroup/hugetlb/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.598869311Z  + findmnt /sys/fs/cgroup/hugetlb/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.600653125Z  + mkdir -p /sys/fs/cgroup/hugetlb/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.601936277Z  + mount --bind /sys/fs/cgroup/hugetlb /sys/fs/cgroup/hugetlb/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.603327936Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-05-09T08:54:48.603345656Z  + local target=/sys/fs/cgroup/perf_event/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.603356206Z  + findmnt /sys/fs/cgroup/perf_event/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.605703733Z  + mkdir -p /sys/fs/cgroup/perf_event/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.606930655Z  + mount --bind /sys/fs/cgroup/perf_event /sys/fs/cgroup/perf_event/docker/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55
	2022-05-09T08:54:48.608352931Z  + mount --make-rprivate /sys/fs/cgroup
	2022-05-09T08:54:48.610192609Z  + echo '/sys/fs/cgroup/systemd
	2022-05-09T08:54:48.610206857Z  /sys/fs/cgroup/net_cls,net_prio
	2022-05-09T08:54:48.610220939Z  /sys/fs/cgroup/pids
	2022-05-09T08:54:48.610224594Z  /sys/fs/cgroup/blkio
	2022-05-09T08:54:48.610228030Z  /sys/fs/cgroup/cpu,cpuacct
	2022-05-09T08:54:48.610231475Z  /sys/fs/cgroup/freezer
	2022-05-09T08:54:48.610234947Z  /sys/fs/cgroup/rdma
	2022-05-09T08:54:48.610238274Z  /sys/fs/cgroup/memory
	2022-05-09T08:54:48.610252110Z  /sys/fs/cgroup/devices
	2022-05-09T08:54:48.610255784Z  /sys/fs/cgroup/cpuset
	2022-05-09T08:54:48.610259138Z  /sys/fs/cgroup/hugetlb
	2022-05-09T08:54:48.610262284Z  /sys/fs/cgroup/perf_event'
	2022-05-09T08:54:48.610272595Z  + IFS=
	2022-05-09T08:54:48.610276540Z  + read -r subsystem
	2022-05-09T08:54:48.610371307Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/systemd
	2022-05-09T08:54:48.610393767Z  + local cgroup_root=/kubelet
	2022-05-09T08:54:48.610447245Z  + local subsystem=/sys/fs/cgroup/systemd
	2022-05-09T08:54:48.610464114Z  + '[' -z /kubelet ']'
	2022-05-09T08:54:48.610513753Z  + mkdir -p /sys/fs/cgroup/systemd//kubelet
	2022-05-09T08:54:48.611508052Z  + '[' /sys/fs/cgroup/systemd == /sys/fs/cgroup/cpuset ']'
	2022-05-09T08:54:48.611534686Z  + mount --bind /sys/fs/cgroup/systemd//kubelet /sys/fs/cgroup/systemd//kubelet
	2022-05-09T08:54:48.612962535Z  + IFS=
	2022-05-09T08:54:48.612979492Z  + read -r subsystem
	2022-05-09T08:54:48.612985176Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/net_cls,net_prio
	2022-05-09T08:54:48.613001130Z  + local cgroup_root=/kubelet
	2022-05-09T08:54:48.613006620Z  + local subsystem=/sys/fs/cgroup/net_cls,net_prio
	2022-05-09T08:54:48.613245064Z  + '[' -z /kubelet ']'
	2022-05-09T08:54:48.613257642Z  + mkdir -p /sys/fs/cgroup/net_cls,net_prio//kubelet
	2022-05-09T08:54:48.614415224Z  + '[' /sys/fs/cgroup/net_cls,net_prio == /sys/fs/cgroup/cpuset ']'
	2022-05-09T08:54:48.614430406Z  + mount --bind /sys/fs/cgroup/net_cls,net_prio//kubelet /sys/fs/cgroup/net_cls,net_prio//kubelet
	2022-05-09T08:54:48.616087283Z  + IFS=
	2022-05-09T08:54:48.616101283Z  + read -r subsystem
	2022-05-09T08:54:48.616105704Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/pids
	2022-05-09T08:54:48.616108868Z  + local cgroup_root=/kubelet
	2022-05-09T08:54:48.616141544Z  + local subsystem=/sys/fs/cgroup/pids
	2022-05-09T08:54:48.616155294Z  + '[' -z /kubelet ']'
	2022-05-09T08:54:48.616159239Z  + mkdir -p /sys/fs/cgroup/pids//kubelet
	2022-05-09T08:54:48.617284882Z  + '[' /sys/fs/cgroup/pids == /sys/fs/cgroup/cpuset ']'
	2022-05-09T08:54:48.617303449Z  + mount --bind /sys/fs/cgroup/pids//kubelet /sys/fs/cgroup/pids//kubelet
	2022-05-09T08:54:48.618668985Z  + IFS=
	2022-05-09T08:54:48.618683956Z  + read -r subsystem
	2022-05-09T08:54:48.618688310Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/blkio
	2022-05-09T08:54:48.618691995Z  + local cgroup_root=/kubelet
	2022-05-09T08:54:48.618728923Z  + local subsystem=/sys/fs/cgroup/blkio
	2022-05-09T08:54:48.618750290Z  + '[' -z /kubelet ']'
	2022-05-09T08:54:48.618753462Z  + mkdir -p /sys/fs/cgroup/blkio//kubelet
	2022-05-09T08:54:48.619763460Z  + '[' /sys/fs/cgroup/blkio == /sys/fs/cgroup/cpuset ']'
	2022-05-09T08:54:48.619770759Z  + mount --bind /sys/fs/cgroup/blkio//kubelet /sys/fs/cgroup/blkio//kubelet
	2022-05-09T08:54:48.621114013Z  + IFS=
	2022-05-09T08:54:48.621127858Z  + read -r subsystem
	2022-05-09T08:54:48.621131952Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/cpu,cpuacct
	2022-05-09T08:54:48.621135451Z  + local cgroup_root=/kubelet
	2022-05-09T08:54:48.621139093Z  + local subsystem=/sys/fs/cgroup/cpu,cpuacct
	2022-05-09T08:54:48.621191001Z  + '[' -z /kubelet ']'
	2022-05-09T08:54:48.621204858Z  + mkdir -p /sys/fs/cgroup/cpu,cpuacct//kubelet
	2022-05-09T08:54:48.622435221Z  + '[' /sys/fs/cgroup/cpu,cpuacct == /sys/fs/cgroup/cpuset ']'
	2022-05-09T08:54:48.622450554Z  + mount --bind /sys/fs/cgroup/cpu,cpuacct//kubelet /sys/fs/cgroup/cpu,cpuacct//kubelet
	2022-05-09T08:54:48.623904570Z  + IFS=
	2022-05-09T08:54:48.623919652Z  + read -r subsystem
	2022-05-09T08:54:48.623924396Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/freezer
	2022-05-09T08:54:48.623954773Z  + local cgroup_root=/kubelet
	2022-05-09T08:54:48.623959048Z  + local subsystem=/sys/fs/cgroup/freezer
	2022-05-09T08:54:48.623962278Z  + '[' -z /kubelet ']'
	2022-05-09T08:54:48.623965670Z  + mkdir -p /sys/fs/cgroup/freezer//kubelet
	2022-05-09T08:54:48.625029116Z  + '[' /sys/fs/cgroup/freezer == /sys/fs/cgroup/cpuset ']'
	2022-05-09T08:54:48.625046744Z  + mount --bind /sys/fs/cgroup/freezer//kubelet /sys/fs/cgroup/freezer//kubelet
	2022-05-09T08:54:48.626317951Z  + IFS=
	2022-05-09T08:54:48.626332395Z  + read -r subsystem
	2022-05-09T08:54:48.626336767Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/rdma
	2022-05-09T08:54:48.626340462Z  + local cgroup_root=/kubelet
	2022-05-09T08:54:48.626343634Z  + local subsystem=/sys/fs/cgroup/rdma
	2022-05-09T08:54:48.626346895Z  + '[' -z /kubelet ']'
	2022-05-09T08:54:48.626350010Z  + mkdir -p /sys/fs/cgroup/rdma//kubelet
	2022-05-09T08:54:48.627496152Z  + '[' /sys/fs/cgroup/rdma == /sys/fs/cgroup/cpuset ']'
	2022-05-09T08:54:48.627511352Z  + mount --bind /sys/fs/cgroup/rdma//kubelet /sys/fs/cgroup/rdma//kubelet
	2022-05-09T08:54:48.628631281Z  + IFS=
	2022-05-09T08:54:48.628647043Z  + read -r subsystem
	2022-05-09T08:54:48.628650828Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/memory
	2022-05-09T08:54:48.628653940Z  + local cgroup_root=/kubelet
	2022-05-09T08:54:48.628693466Z  + local subsystem=/sys/fs/cgroup/memory
	2022-05-09T08:54:48.628698453Z  + '[' -z /kubelet ']'
	2022-05-09T08:54:48.628701999Z  + mkdir -p /sys/fs/cgroup/memory//kubelet
	2022-05-09T08:54:48.632741012Z  + '[' /sys/fs/cgroup/memory == /sys/fs/cgroup/cpuset ']'
	2022-05-09T08:54:48.632758647Z  + mount --bind /sys/fs/cgroup/memory//kubelet /sys/fs/cgroup/memory//kubelet
	2022-05-09T08:54:48.632761680Z  + IFS=
	2022-05-09T08:54:48.632763932Z  + read -r subsystem
	2022-05-09T08:54:48.632766097Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/devices
	2022-05-09T08:54:48.632768604Z  + local cgroup_root=/kubelet
	2022-05-09T08:54:48.632770748Z  + local subsystem=/sys/fs/cgroup/devices
	2022-05-09T08:54:48.632772777Z  + '[' -z /kubelet ']'
	2022-05-09T08:54:48.632774857Z  + mkdir -p /sys/fs/cgroup/devices//kubelet
	2022-05-09T08:54:48.632776920Z  + '[' /sys/fs/cgroup/devices == /sys/fs/cgroup/cpuset ']'
	2022-05-09T08:54:48.632779220Z  + mount --bind /sys/fs/cgroup/devices//kubelet /sys/fs/cgroup/devices//kubelet
	2022-05-09T08:54:48.633518582Z  + IFS=
	2022-05-09T08:54:48.633533342Z  + read -r subsystem
	2022-05-09T08:54:48.633537608Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/cpuset
	2022-05-09T08:54:48.633554800Z  + local cgroup_root=/kubelet
	2022-05-09T08:54:48.633557325Z  + local subsystem=/sys/fs/cgroup/cpuset
	2022-05-09T08:54:48.633559354Z  + '[' -z /kubelet ']'
	2022-05-09T08:54:48.633561358Z  + mkdir -p /sys/fs/cgroup/cpuset//kubelet
	2022-05-09T08:54:48.634676883Z  + '[' /sys/fs/cgroup/cpuset == /sys/fs/cgroup/cpuset ']'
	2022-05-09T08:54:48.634691047Z  + cat /sys/fs/cgroup/cpuset/cpuset.cpus
	2022-05-09T08:54:48.635753652Z  + cat /sys/fs/cgroup/cpuset/cpuset.mems
	2022-05-09T08:54:48.636819371Z  + mount --bind /sys/fs/cgroup/cpuset//kubelet /sys/fs/cgroup/cpuset//kubelet
	2022-05-09T08:54:48.638369533Z  + IFS=
	2022-05-09T08:54:48.638382618Z  + read -r subsystem
	2022-05-09T08:54:48.638386744Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/hugetlb
	2022-05-09T08:54:48.638480846Z  + local cgroup_root=/kubelet
	2022-05-09T08:54:48.638497058Z  + local subsystem=/sys/fs/cgroup/hugetlb
	2022-05-09T08:54:48.638501381Z  + '[' -z /kubelet ']'
	2022-05-09T08:54:48.638504676Z  + mkdir -p /sys/fs/cgroup/hugetlb//kubelet
	2022-05-09T08:54:48.639580000Z  + '[' /sys/fs/cgroup/hugetlb == /sys/fs/cgroup/cpuset ']'
	2022-05-09T08:54:48.639593873Z  + mount --bind /sys/fs/cgroup/hugetlb//kubelet /sys/fs/cgroup/hugetlb//kubelet
	2022-05-09T08:54:48.641233270Z  + IFS=
	2022-05-09T08:54:48.641249737Z  + read -r subsystem
	2022-05-09T08:54:48.641254116Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/perf_event
	2022-05-09T08:54:48.641257739Z  + local cgroup_root=/kubelet
	2022-05-09T08:54:48.641261106Z  + local subsystem=/sys/fs/cgroup/perf_event
	2022-05-09T08:54:48.641299852Z  + '[' -z /kubelet ']'
	2022-05-09T08:54:48.641312761Z  + mkdir -p /sys/fs/cgroup/perf_event//kubelet
	2022-05-09T08:54:48.642292336Z  + '[' /sys/fs/cgroup/perf_event == /sys/fs/cgroup/cpuset ']'
	2022-05-09T08:54:48.642300239Z  + mount --bind /sys/fs/cgroup/perf_event//kubelet /sys/fs/cgroup/perf_event//kubelet
	2022-05-09T08:54:48.643522811Z  + IFS=
	2022-05-09T08:54:48.643537566Z  + read -r subsystem
	2022-05-09T08:54:48.643924011Z  + return
	2022-05-09T08:54:48.643944798Z  + fix_machine_id
	2022-05-09T08:54:48.643948943Z  + echo 'INFO: clearing and regenerating /etc/machine-id'
	2022-05-09T08:54:48.643984496Z  INFO: clearing and regenerating /etc/machine-id
	2022-05-09T08:54:48.643996949Z  + rm -f /etc/machine-id
	2022-05-09T08:54:48.644897924Z  + systemd-machine-id-setup
	2022-05-09T08:54:48.648696420Z  Initializing machine ID from D-Bus machine ID.
	2022-05-09T08:54:48.652007275Z  + fix_product_name
	2022-05-09T08:54:48.652029499Z  + [[ -f /sys/class/dmi/id/product_name ]]
	2022-05-09T08:54:48.652166407Z  + echo 'INFO: faking /sys/class/dmi/id/product_name to be "kind"'
	2022-05-09T08:54:48.652188535Z  INFO: faking /sys/class/dmi/id/product_name to be "kind"
	2022-05-09T08:54:48.652193280Z  + echo kind
	2022-05-09T08:54:48.652317652Z  + mount -o ro,bind /kind/product_name /sys/class/dmi/id/product_name
	2022-05-09T08:54:48.654576894Z  + fix_product_uuid
	2022-05-09T08:54:48.654920962Z  + [[ ! -f /kind/product_uuid ]]
	2022-05-09T08:54:48.654936501Z  + cat /proc/sys/kernel/random/uuid
	2022-05-09T08:54:48.656224502Z  + [[ -f /sys/class/dmi/id/product_uuid ]]
	2022-05-09T08:54:48.656480423Z  + echo 'INFO: faking /sys/class/dmi/id/product_uuid to be random'
	2022-05-09T08:54:48.656811969Z  INFO: faking /sys/class/dmi/id/product_uuid to be random
	2022-05-09T08:54:48.657012896Z  + mount -o ro,bind /kind/product_uuid /sys/class/dmi/id/product_uuid
	2022-05-09T08:54:48.658849907Z  + [[ -f /sys/devices/virtual/dmi/id/product_uuid ]]
	2022-05-09T08:54:48.658922027Z  + echo 'INFO: faking /sys/devices/virtual/dmi/id/product_uuid as well'
	2022-05-09T08:54:48.658928802Z  INFO: faking /sys/devices/virtual/dmi/id/product_uuid as well
	2022-05-09T08:54:48.658933089Z  + mount -o ro,bind /kind/product_uuid /sys/devices/virtual/dmi/id/product_uuid
	2022-05-09T08:54:48.660598758Z  + select_iptables
	2022-05-09T08:54:48.660640772Z  + local mode=nft
	2022-05-09T08:54:48.662524997Z  ++ wc -l
	2022-05-09T08:54:48.668976711Z  ++ grep '^-'
	2022-05-09T08:54:48.670423709Z  + num_legacy_lines=6
	2022-05-09T08:54:48.670444339Z  + '[' 6 -ge 10 ']'
	2022-05-09T08:54:48.671409455Z  ++ grep '^-'
	2022-05-09T08:54:48.671472590Z  ++ wc -l
	2022-05-09T08:54:48.676259391Z  ++ true
	2022-05-09T08:54:48.676528455Z  + num_nft_lines=0
	2022-05-09T08:54:48.676641878Z  + '[' 6 -ge 0 ']'
	2022-05-09T08:54:48.676654457Z  + mode=legacy
	2022-05-09T08:54:48.676659525Z  + echo 'INFO: setting iptables to detected mode: legacy'
	2022-05-09T08:54:48.676663297Z  INFO: setting iptables to detected mode: legacy
	2022-05-09T08:54:48.676668168Z  + update-alternatives --set iptables /usr/sbin/iptables-legacy
	2022-05-09T08:54:48.676711079Z  + echo 'retryable update-alternatives: --set iptables /usr/sbin/iptables-legacy'
	2022-05-09T08:54:48.676716691Z  + local 'args=--set iptables /usr/sbin/iptables-legacy'
	2022-05-09T08:54:48.677129110Z  ++ seq 0 15
	2022-05-09T08:54:48.677887760Z  + for i in $(seq 0 15)
	2022-05-09T08:54:48.677923378Z  + /usr/bin/update-alternatives --set iptables /usr/sbin/iptables-legacy
	2022-05-09T08:54:48.682231290Z  + return
	2022-05-09T08:54:48.682257261Z  + update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
	2022-05-09T08:54:48.682269494Z  + echo 'retryable update-alternatives: --set ip6tables /usr/sbin/ip6tables-legacy'
	2022-05-09T08:54:48.682301012Z  + local 'args=--set ip6tables /usr/sbin/ip6tables-legacy'
	2022-05-09T08:54:48.682763162Z  ++ seq 0 15
	2022-05-09T08:54:48.683704961Z  + for i in $(seq 0 15)
	2022-05-09T08:54:48.683751511Z  + /usr/bin/update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
	2022-05-09T08:54:48.687089186Z  + return
	2022-05-09T08:54:48.687110450Z  + enable_network_magic
	2022-05-09T08:54:48.687155319Z  + local docker_embedded_dns_ip=127.0.0.11
	2022-05-09T08:54:48.687171375Z  + local docker_host_ip
	2022-05-09T08:54:48.688428728Z  ++ cut '-d ' -f1
	2022-05-09T08:54:48.688441794Z  ++ head -n1 /dev/fd/63
	2022-05-09T08:54:48.688541379Z  +++ getent ahostsv4 host.docker.internal
	2022-05-09T08:54:48.704892876Z  + docker_host_ip=
	2022-05-09T08:54:48.704927706Z  + [[ -z '' ]]
	2022-05-09T08:54:48.705672035Z  ++ ip -4 route show default
	2022-05-09T08:54:48.705697608Z  ++ cut '-d ' -f3
	2022-05-09T08:54:48.707997158Z  + docker_host_ip=192.168.58.1
	2022-05-09T08:54:48.708291965Z  + iptables-save
	2022-05-09T08:54:48.708673076Z  + iptables-restore
	2022-05-09T08:54:48.710278828Z  + sed -e 's/-d 127.0.0.11/-d 192.168.58.1/g' -e 's/-A OUTPUT \(.*\) -j DOCKER_OUTPUT/\0\n-A PREROUTING \1 -j DOCKER_OUTPUT/' -e 's/--to-source :53/--to-source 192.168.58.1:53/g'
	2022-05-09T08:54:48.713519355Z  + cp /etc/resolv.conf /etc/resolv.conf.original
	2022-05-09T08:54:48.715214009Z  + sed -e s/127.0.0.11/192.168.58.1/g /etc/resolv.conf.original
	2022-05-09T08:54:48.721381801Z  ++ cut '-d ' -f1
	2022-05-09T08:54:48.721495582Z  ++ head -n1 /dev/fd/63
	2022-05-09T08:54:48.722167585Z  ++++ hostname
	2022-05-09T08:54:48.722925563Z  +++ getent ahostsv4 kubernetes-upgrade-20220509085441-6723
	2022-05-09T08:54:48.725827211Z  + curr_ipv4=192.168.58.2
	2022-05-09T08:54:48.725843524Z  + echo 'INFO: Detected IPv4 address: 192.168.58.2'
	2022-05-09T08:54:48.725847584Z  INFO: Detected IPv4 address: 192.168.58.2
	2022-05-09T08:54:48.725850354Z  + '[' -f /kind/old-ipv4 ']'
	2022-05-09T08:54:48.725963468Z  + [[ -n 192.168.58.2 ]]
	2022-05-09T08:54:48.725978216Z  + echo -n 192.168.58.2
	2022-05-09T08:54:48.727205326Z  ++ cut '-d ' -f1
	2022-05-09T08:54:48.727336070Z  ++ head -n1 /dev/fd/63
	2022-05-09T08:54:48.728589155Z  ++++ hostname
	2022-05-09T08:54:48.729390159Z  +++ getent ahostsv6 kubernetes-upgrade-20220509085441-6723
	2022-05-09T08:54:48.731247255Z  + curr_ipv6=
	2022-05-09T08:54:48.731361196Z  + echo 'INFO: Detected IPv6 address: '
	2022-05-09T08:54:48.731538731Z  INFO: Detected IPv6 address: 
	2022-05-09T08:54:48.731559946Z  + '[' -f /kind/old-ipv6 ']'
	2022-05-09T08:54:48.731563664Z  + [[ -n '' ]]
	2022-05-09T08:54:48.732112224Z  ++ uname -a
	2022-05-09T08:54:48.732929561Z  + echo 'entrypoint completed: Linux kubernetes-upgrade-20220509085441-6723 5.13.0-1024-gcp #29~20.04.1-Ubuntu SMP Thu Apr 14 23:15:00 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux'
	2022-05-09T08:54:48.733021973Z  entrypoint completed: Linux kubernetes-upgrade-20220509085441-6723 5.13.0-1024-gcp #29~20.04.1-Ubuntu SMP Thu Apr 14 23:15:00 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	2022-05-09T08:54:48.733125482Z  + exec /sbin/init
	2022-05-09T08:54:48.741524396Z  systemd 245.4-4ubuntu3.15 running in system mode. (+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid)
	2022-05-09T08:54:48.741551150Z  Detected virtualization docker.
	2022-05-09T08:54:48.741554542Z  Detected architecture x86-64.
	2022-05-09T08:54:48.742168314Z  
	2022-05-09T08:54:48.742187397Z  Welcome to Ubuntu 20.04.4 LTS!
	2022-05-09T08:54:48.742192333Z  
	2022-05-09T08:54:48.742247475Z  Set hostname to <kubernetes-upgrade-20220509085441-6723>.
	2022-05-09T08:54:48.800205113Z  [  OK  ] Started Dispatch Password …ts to Console Directory Watch.
	2022-05-09T08:54:48.800438379Z  [  OK  ] Set up automount Arbitrary…s File System Automount Point.
	2022-05-09T08:54:48.800459193Z  [  OK  ] Reached target Local Encrypted Volumes.
	2022-05-09T08:54:48.800464673Z  [  OK  ] Reached target Network is Online.
	2022-05-09T08:54:48.800583296Z  [  OK  ] Reached target Paths.
	2022-05-09T08:54:48.800589856Z  [  OK  ] Reached target Slices.
	2022-05-09T08:54:48.800594456Z  [  OK  ] Reached target Swap.
	2022-05-09T08:54:48.800966471Z  [  OK  ] Listening on Journal Audit Socket.
	2022-05-09T08:54:48.801111687Z  [  OK  ] Listening on Journal Socket (/dev/log).
	2022-05-09T08:54:48.801292328Z  [  OK  ] Listening on Journal Socket.
	2022-05-09T08:54:48.803726050Z           Mounting Huge Pages File System...
	2022-05-09T08:54:48.805730665Z           Mounting Kernel Debug File System...
	2022-05-09T08:54:48.809219084Z           Mounting Kernel Trace File System...
	2022-05-09T08:54:48.811132246Z           Starting Journal Service...
	2022-05-09T08:54:48.851486030Z           Starting Create list of st…odes for the current kernel...
	2022-05-09T08:54:48.851541489Z           Mounting FUSE Control File System...
	2022-05-09T08:54:48.853491712Z           Starting Remount Root and Kernel File Systems...
	2022-05-09T08:54:48.863171459Z           Starting Apply Kernel Variables...
	2022-05-09T08:54:48.864697038Z  [  OK  ] Started Journal Service.
	2022-05-09T08:54:48.865160056Z  [  OK  ] Mounted Huge Pages File System.
	2022-05-09T08:54:48.865336584Z  [  OK  ] Mounted Kernel Debug File System.
	2022-05-09T08:54:48.865446089Z  [  OK  ] Mounted Kernel Trace File System.
	2022-05-09T08:54:48.866423358Z  [  OK  ] Finished Create list of st… nodes for the current kernel.
	2022-05-09T08:54:48.868044100Z  [  OK  ] Mounted FUSE Control File System.
	2022-05-09T08:54:48.868058259Z  [  OK  ] Finished Remount Root and Kernel File Systems.
	2022-05-09T08:54:48.869309096Z           Starting Flush Journal to Persistent Storage...
	2022-05-09T08:54:48.870576065Z           Starting Create System Users...
	2022-05-09T08:54:48.872015600Z           Starting Update UTMP about System Boot/Shutdown...
	2022-05-09T08:54:48.874644249Z  [  OK  ] Finished Apply Kernel Variables.
	2022-05-09T08:54:48.876972947Z  [  OK  ] Finished Flush Journal to Persistent Storage.
	2022-05-09T08:54:48.884175971Z  [  OK  ] Finished Update UTMP about System Boot/Shutdown.
	2022-05-09T08:54:48.894060711Z  [  OK  ] Finished Create System Users.
	2022-05-09T08:54:48.895689953Z           Starting Create Static Device Nodes in /dev...
	2022-05-09T08:54:48.902614909Z  [  OK  ] Finished Create Static Device Nodes in /dev.
	2022-05-09T08:54:48.902714982Z  [  OK  ] Reached target Local File Systems (Pre).
	2022-05-09T08:54:48.902867181Z  [  OK  ] Reached target Local File Systems.
	2022-05-09T08:54:48.903012035Z  [  OK  ] Reached target System Initialization.
	2022-05-09T08:54:48.903032663Z  [  OK  ] Started Daily Cleanup of Temporary Directories.
	2022-05-09T08:54:48.903077708Z  [  OK  ] Reached target Timers.
	2022-05-09T08:54:48.903254013Z  [  OK  ] Listening on BuildKit.
	2022-05-09T08:54:48.903375631Z  [  OK  ] Listening on D-Bus System Message Bus Socket.
	2022-05-09T08:54:48.904730907Z           Starting Docker Socket for the API.
	2022-05-09T08:54:48.907043709Z           Starting Podman API Socket.
	2022-05-09T08:54:48.907454928Z  [  OK  ] Listening on Docker Socket for the API.
	2022-05-09T08:54:48.908446763Z  [  OK  ] Listening on Podman API Socket.
	2022-05-09T08:54:48.908461725Z  [  OK  ] Reached target Sockets.
	2022-05-09T08:54:48.908467114Z  [  OK  ] Reached target Basic System.
	2022-05-09T08:54:48.909647516Z           Starting containerd container runtime...
	2022-05-09T08:54:48.911079878Z  [  OK  ] Started D-Bus System Message Bus.
	2022-05-09T08:54:48.914052209Z           Starting minikube automount...
	2022-05-09T08:54:48.915380293Z           Starting OpenBSD Secure Shell server...
	2022-05-09T08:54:48.942091320Z  [  OK  ] Finished minikube automount.
	2022-05-09T08:54:48.980803941Z  [  OK  ] Started OpenBSD Secure Shell server.
	2022-05-09T08:54:49.025097606Z  [  OK  ] Started containerd container runtime.
	2022-05-09T08:54:49.028943848Z           Starting Docker Application Container Engine...
	2022-05-09T08:54:49.388449759Z  [  OK  ] Started Docker Application Container Engine.
	2022-05-09T08:54:49.388489289Z  [  OK  ] Reached target Multi-User System.
	2022-05-09T08:54:49.388495868Z  [  OK  ] Reached target Graphical Interface.
	2022-05-09T08:54:49.390075201Z           Starting Update UTMP about System Runlevel Changes...
	2022-05-09T08:54:49.399372972Z  [  OK  ] Finished Update UTMP about System Runlevel Changes.
	2022-05-09T08:55:24.515684789Z  [  OK  ] Stopped target Graphical Interface.
	2022-05-09T08:55:24.515867703Z  [  OK  ] Stopped target Multi-User System.
	2022-05-09T08:55:24.516069122Z  [  OK  ] Stopped target Timers.
	2022-05-09T08:55:24.516232339Z  [  OK  ] Stopped Daily Cleanup of Temporary Directories.
	2022-05-09T08:55:24.519316635Z           Stopping D-Bus System Message Bus...
	2022-05-09T08:55:24.519582766Z           Stopping Docker Application Container Engine...
	2022-05-09T08:55:24.519843067Z           Stopping kubelet: The Kubernetes Node Agent...
	2022-05-09T08:55:24.520056189Z           Stopping OpenBSD Secure Shell server...
	2022-05-09T08:55:24.521788264Z  [  OK  ] Stopped D-Bus System Message Bus.
	2022-05-09T08:55:24.523074666Z  [  OK  ] Stopped OpenBSD Secure Shell server.
	2022-05-09T08:55:24.566159260Z  [  OK  ] Stopped kubelet: The Kubernetes Node Agent.
	2022-05-09T08:55:24.869124769Z  [  OK  ] Unmounted /var/lib/docker/…ee005ef76b2749893a3738/merged.
	2022-05-09T08:55:24.893760964Z  [  OK  ] Unmounted /var/lib/docker/…a0eb1c8e5d2467b89a7e07/merged.
	2022-05-09T08:55:24.910495067Z  [  OK  ] Unmounted /var/lib/docker/…538a4f4e67e2c7821c1139/merged.
	2022-05-09T08:55:24.915653406Z  [  OK  ] Unmounted /var/lib/docker/…050dba3b2923467dbe065d/merged.
	2022-05-09T08:55:24.919648050Z  [  OK  ] Unmounted /var/lib/docker/…ffae74027356e06d4f50f9/merged.
	2022-05-09T08:55:24.920955023Z  [  OK  ] Unmounted /var/lib/docker/…6b32c73da71531b6a3/mounts/shm.
	2022-05-09T08:55:24.921002151Z  [  OK  ] Unmounted /var/lib/docker/…c64a6ca6534a56da9795ba/merged.
	2022-05-09T08:55:24.928125021Z  [  OK  ] Unmounted /var/lib/docker/…8835f4b20ae9f34d9a/mounts/shm.
	2022-05-09T08:55:24.928151735Z  [  OK  ] Unmounted /var/lib/docker/…39f73d9ad064762af314d9/merged.
	2022-05-09T08:55:24.965615143Z  [  OK  ] Unmounted /var/lib/docker/…46f3b6963233d8fc86/mounts/shm.
	2022-05-09T08:55:24.966643475Z  [  OK  ] Unmounted /var/lib/docker/…63c55b66636b13c700df78/merged.
	2022-05-09T08:55:24.971831772Z  [  OK  ] Unmounted /var/lib/docker/…9a93d5503632eea0bd/mounts/shm.
	2022-05-09T08:55:24.973331139Z  [  OK  ] Unmounted /var/lib/docker/…fb16684640e9f42e2d6fa8/merged.
	2022-05-09T08:55:24.981349077Z  [  OK  ] Unmounted /var/lib/docker/…1995469e919e04d7a8/mounts/shm.
	2022-05-09T08:55:24.981464910Z  [  OK  ] Unmounted /var/lib/docker/…484bcb070061b068af436c/merged.
	2022-05-09T08:55:24.981636453Z  [  OK  ] Unmounted /var/lib/docker/…ecd58c34f93cbafa82/mounts/shm.
	2022-05-09T08:55:24.984855002Z  [  OK  ] Unmounted /var/lib/docker/…9550b71290b8c9796e0e98/merged.
	2022-05-09T08:55:27.946224275Z  [*     ] A stop job is running for Docker Ap…n Container Engine (1s / 1min 28s)
	2022-05-09T08:55:28.446217891Z  [**    ] A stop job is running for Docker Ap…n Container Engine (2s / 1min 28s)
	2022-05-09T08:55:28.946206312Z  [***   ] A stop job is running for Docker Ap…n Container Engine (2s / 1min 28s)
	2022-05-09T08:55:29.446197101Z  [ ***  ] A stop job is running for Docker Ap…n Container Engine (3s / 1min 28s)
	2022-05-09T08:55:29.946174641Z  [  *** ] A stop job is running for Docker Ap…n Container Engine (3s / 1min 28s)
	2022-05-09T08:55:30.446202226Z  [   ***] A stop job is running for Docker Ap…n Container Engine (4s / 1min 28s)
	2022-05-09T08:55:30.946343866Z  [    **] A stop job is running for Docker Ap…n Container Engine (4s / 1min 28s)
	2022-05-09T08:55:31.446240569Z  [     *] A stop job is running for Docker Ap…n Container Engine (5s / 1min 28s)
	2022-05-09T08:55:31.946210252Z  [    **] A stop job is running for Docker Ap…n Container Engine (5s / 1min 28s)
	2022-05-09T08:55:32.446244845Z  [   ***] A stop job is running for Docker Ap…n Container Engine (6s / 1min 28s)
	2022-05-09T08:55:32.946266274Z  [  *** ] A stop job is running for Docker Ap…n Container Engine (6s / 1min 28s)
	2022-05-09T08:55:33.446269782Z  [ ***  ] A stop job is running for Docker Ap…n Container Engine (7s / 1min 28s)
	2022-05-09T08:55:33.946217519Z  [***   ] A stop job is running for Docker Ap…n Container Engine (7s / 1min 28s)
	2022-05-09T08:55:34.446221269Z  [**    ] A stop job is running for Docker Ap…n Container Engine (8s / 1min 28s)
	2022-05-09T08:55:34.741819053Z  [  OK  ] Unmounted /var/lib/docker/…4d90fcc19c9dd2185e8206/merged.
	2022-05-09T08:55:34.762903828Z  [  OK  ] Stopped Docker Application Container Engine.
	2022-05-09T08:55:34.763006369Z  [  OK  ] Stopped target Network is Online.
	2022-05-09T08:55:34.763073811Z           Stopping containerd container runtime...
	2022-05-09T08:55:34.763801528Z  [  OK  ] Stopped minikube automount.
	2022-05-09T08:55:34.767383402Z  [  OK  ] Stopped containerd container runtime.
	2022-05-09T08:55:34.767535024Z  [  OK  ] Stopped target Basic System.
	2022-05-09T08:55:34.767551301Z  [  OK  ] Stopped target Paths.
	2022-05-09T08:55:34.767613735Z  [  OK  ] Stopped target Slices.
	2022-05-09T08:55:34.767641754Z  [  OK  ] Stopped target Sockets.
	2022-05-09T08:55:34.768262704Z  [  OK  ] Closed BuildKit.
	2022-05-09T08:55:34.768843514Z  [  OK  ] Closed D-Bus System Message Bus Socket.
	2022-05-09T08:55:34.769371617Z  [  OK  ] Closed Docker Socket for the API.
	2022-05-09T08:55:34.769908808Z  [  OK  ] Closed Podman API Socket.
	2022-05-09T08:55:34.769915903Z  [  OK  ] Stopped target System Initialization.
	2022-05-09T08:55:34.769948889Z  [  OK  ] Stopped target Local Encrypted Volumes.
	2022-05-09T08:55:34.785642278Z  [  OK  ] Stopped Dispatch Password …ts to Console Directory Watch.
	2022-05-09T08:55:34.785987158Z  [  OK  ] Stopped target Local File Systems.
	2022-05-09T08:55:34.787006032Z           Unmounting /data...
	2022-05-09T08:55:34.790417016Z           Unmounting /etc/hostname...
	2022-05-09T08:55:34.790441045Z           Unmounting /etc/hosts...
	2022-05-09T08:55:34.790445391Z           Unmounting /etc/resolv.conf...
	2022-05-09T08:55:34.790640208Z           Unmounting /kind/product_uuid...
	2022-05-09T08:55:34.792699462Z           Unmounting /run/docker/netns/default...
	2022-05-09T08:55:34.794912239Z           Unmounting /tmp/hostpath-provisioner...
	2022-05-09T08:55:34.797678077Z           Unmounting /tmp/hostpath_pv...
	2022-05-09T08:55:34.798619944Z           Unmounting /usr/lib/modules...
	2022-05-09T08:55:34.800275815Z           Unmounting /var/lib/kubele…cret/kube-proxy-token-cgwjt...
	2022-05-09T08:55:34.802011240Z           Unmounting /var/lib/kubele…~secret/coredns-token-6fhnf...
	2022-05-09T08:55:34.803417213Z           Unmounting /var/lib/kubele…age-provisioner-token-p8vbs...
	2022-05-09T08:55:34.804104710Z  [  OK  ] Stopped Apply Kernel Variables.
	2022-05-09T08:55:34.804963727Z           Stopping Update UTMP about System Boot/Shutdown...
	2022-05-09T08:55:34.808097022Z  [  OK  ] Unmounted /data.
	2022-05-09T08:55:34.808887823Z  [  OK  ] Unmounted /etc/hostname.
	2022-05-09T08:55:34.809561466Z  [  OK  ] Unmounted /etc/hosts.
	2022-05-09T08:55:34.810423733Z  [  OK  ] Unmounted /etc/resolv.conf.
	2022-05-09T08:55:34.811154862Z  [  OK  ] Unmounted /kind/product_uuid.
	2022-05-09T08:55:34.811897053Z  [  OK  ] Unmounted /run/docker/netns/default.
	2022-05-09T08:55:34.812589174Z  [  OK  ] Unmounted /tmp/hostpath-provisioner.
	2022-05-09T08:55:34.813609878Z  [  OK  ] Unmounted /tmp/hostpath_pv.
	2022-05-09T08:55:34.814478125Z  [  OK  ] Unmounted /usr/lib/modules.
	2022-05-09T08:55:34.815328236Z  [  OK  ] Unmounted /var/lib/kubelet…secret/kube-proxy-token-cgwjt.
	2022-05-09T08:55:34.816365714Z  [  OK  ] Unmounted /var/lib/kubelet…io~secret/coredns-token-6fhnf.
	2022-05-09T08:55:34.817679811Z  [  OK  ] Unmounted /var/lib/kubelet…orage-provisioner-token-p8vbs.
	2022-05-09T08:55:34.820457428Z           Unmounting /tmp...
	2022-05-09T08:55:34.824128102Z  [  OK  ] Stopped Update UTMP about System Boot/Shutdown.
	2022-05-09T08:55:34.825867452Z  [  OK  ] Unmounted /tmp.
	2022-05-09T08:55:34.826089373Z  [  OK  ] Stopped target Swap.
	2022-05-09T08:55:34.826902988Z           Unmounting /var...
	2022-05-09T08:55:34.831108502Z  [  OK  ] Unmounted /var.
	2022-05-09T08:55:34.831294727Z  [  OK  ] Stopped target Local File Systems (Pre).
	2022-05-09T08:55:34.831375743Z  [  OK  ] Reached target Unmount All Filesystems.
	2022-05-09T08:55:34.832107670Z  [  OK  ] Stopped Create Static Device Nodes in /dev.
	2022-05-09T08:55:34.832891870Z  [  OK  ] Stopped Create System Users.
	2022-05-09T08:55:34.833667869Z  [  OK  ] Stopped Remount Root and Kernel File Systems.
	2022-05-09T08:55:34.833743159Z  [  OK  ] Reached target Shutdown.
	2022-05-09T08:55:34.833751217Z  [  OK  ] Reached target Final Step.
	2022-05-09T08:55:34.838833342Z           Starting Halt...
	2022-05-09T08:55:34.838855197Z  [  OK  ] Finished Power-Off.
	2022-05-09T08:55:34.838861739Z  [  OK  ] Reached target Power-Off.
	
	-- /stdout --
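The `+`-traced shell in the stdout block above is the node container's entrypoint preparing the environment: it bind-mounts each cgroup controller under the container's own scope and under /kubelet, regenerates /etc/machine-id, fakes the DMI product name/uuid, and selects an iptables backend by counting rule lines from each backend's save command (num_legacy_lines=6 vs. num_nft_lines=0 in this run, hence legacy). Below is a minimal Go sketch of that detection step; it illustrates the technique only, is not minikube or kind source, and the simple "more rules wins" comparison is an assumption based on the trace.

// iptables_mode.go: illustrative sketch of the backend detection traced
// above (the shell equivalent is `... | grep '^-' | wc -l` per backend).
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// countRuleLines counts output lines beginning with "-" (rule lines such
// as "-A CHAIN ..."), returning 0 on error like the trace's `|| true`.
func countRuleLines(saveCmd string) int {
	out, err := exec.Command(saveCmd).Output()
	if err != nil {
		return 0
	}
	n := 0
	for _, line := range strings.Split(string(out), "\n") {
		if strings.HasPrefix(line, "-") {
			n++
		}
	}
	return n
}

func main() {
	legacy := countRuleLines("iptables-legacy-save")
	nft := countRuleLines("iptables-nft-save")
	mode := "nft"
	if legacy >= nft { // this run: 6 legacy lines, 0 nft lines
		mode = "legacy"
	}
	fmt.Println("INFO: setting iptables to detected mode:", mode)
	// The entrypoint then applies the choice via
	// `update-alternatives --set iptables ...`, retried up to 16 times.
}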
	I0509 08:55:47.825621  195809 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0509 08:55:47.943524  195809 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:60 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:60 SystemTime:2022-05-09 08:55:47.855898781 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1024-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0509 08:55:47.943645  195809 errors.go:98] postmortem docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:60 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:60 SystemTime:2022-05-09 08:55:47.855898781 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1024-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
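The `docker system info --format "{{json .}}"` probes above (and again below, after the network inspect) are how the post-mortem snapshots engine state; note ContainersStopped rising from 1 to 2 across them as the node container exits. A hedged Go sketch of consuming that probe, decoding only a handful of the fields visible in the log (the struct subset chosen here is illustrative, not minikube's):

// docker_info.go: decode a few fields from `docker system info --format "{{json .}}"`.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// dockerInfo holds just the fields inspected here; Docker's JSON output
// contains many more, as the log lines above show.
type dockerInfo struct {
	Containers        int    `json:"Containers"`
	ContainersRunning int    `json:"ContainersRunning"`
	ContainersStopped int    `json:"ContainersStopped"`
	ServerVersion     string `json:"ServerVersion"`
	CgroupDriver      string `json:"CgroupDriver"`
}

func main() {
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		panic(err)
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	// In this failing run: 3 containers (2 running, 1 stopped) before the
	// network inspect, then 4 containers (2 stopped) afterwards.
	fmt.Printf("%+v\n", info)
}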
	I0509 08:55:47.943734  195809 network_create.go:272] running [docker network inspect kubernetes-upgrade-20220509085441-6723] to gather additional debugging logs...
	I0509 08:55:47.943763  195809 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220509085441-6723
	W0509 08:55:47.981416  195809 cli_runner.go:211] docker network inspect kubernetes-upgrade-20220509085441-6723 returned with exit code 1
	I0509 08:55:47.981462  195809 network_create.go:275] error running [docker network inspect kubernetes-upgrade-20220509085441-6723]: docker network inspect kubernetes-upgrade-20220509085441-6723: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubernetes-upgrade-20220509085441-6723
	I0509 08:55:47.981484  195809 network_create.go:277] output of [docker network inspect kubernetes-upgrade-20220509085441-6723]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubernetes-upgrade-20220509085441-6723
	
	** /stderr **
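Per the lines above, network_create.go runs `docker network inspect <name>` purely to gather debugging output, and here the network is already gone (the expected "No such network" on exit code 1). A rough Go sketch of that check follows; the helper name and error handling are illustrative, not minikube's code:

// network_debug.go: inspect a Docker network and recognise the
// already-removed case, as the debugging step logged above does.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func inspectNetwork(name string) {
	out, err := exec.Command("docker", "network", "inspect", name).CombinedOutput()
	if err != nil {
		// Docker prints "Error: No such network: <name>" and exits 1
		// when the network no longer exists, exactly as seen above.
		if strings.Contains(string(out), "No such network") {
			fmt.Printf("network %q no longer exists\n", name)
			return
		}
		fmt.Printf("inspect failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("network %q:\n%s", name, out)
}

func main() {
	inspectNetwork("kubernetes-upgrade-20220509085441-6723")
}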
	I0509 08:55:47.981626  195809 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0509 08:55:48.091808  195809 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:2 ContainersPaused:0 ContainersStopped:2 Images:60 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:true NGoroutines:64 SystemTime:2022-05-09 08:55:48.012202948 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1024-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0509 08:55:48.092477  195809 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220509085441-6723
	I0509 08:55:48.129447  195809 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/kubernetes-upgrade-20220509085441-6723/config.json ...
	I0509 08:55:48.129662  195809 machine.go:88] provisioning docker machine ...
	I0509 08:55:48.129691  195809 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-20220509085441-6723"
	I0509 08:55:48.129729  195809 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220509085441-6723
	W0509 08:55:48.171025  195809 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220509085441-6723 returned with exit code 1
	I0509 08:55:48.171091  195809 machine.go:91] provisioned docker machine in 41.409078ms
	I0509 08:55:48.171154  195809 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0509 08:55:48.171198  195809 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220509085441-6723
	W0509 08:55:48.215411  195809 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220509085441-6723 returned with exit code 1
	I0509 08:55:48.215538  195809 retry.go:31] will retry after 200.227965ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0509 08:55:48.416851  195809 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220509085441-6723
	W0509 08:55:48.453221  195809 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220509085441-6723 returned with exit code 1
	I0509 08:55:48.453351  195809 retry.go:31] will retry after 380.704736ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0509 08:55:48.834989  195809 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220509085441-6723
	W0509 08:55:48.898286  195809 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220509085441-6723 returned with exit code 1
	I0509 08:55:48.898412  195809 retry.go:31] will retry after 738.922478ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0509 08:55:49.637796  195809 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220509085441-6723
	W0509 08:55:49.673946  195809 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220509085441-6723 returned with exit code 1
	W0509 08:55:49.674049  195809 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0509 08:55:49.674064  195809 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0509 08:55:49.674103  195809 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0509 08:55:49.674132  195809 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220509085441-6723
	W0509 08:55:49.708247  195809 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220509085441-6723 returned with exit code 1
	I0509 08:55:49.708398  195809 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0509 08:55:49.928786  195809 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220509085441-6723
	W0509 08:55:49.961605  195809 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220509085441-6723 returned with exit code 1
	I0509 08:55:49.961736  195809 retry.go:31] will retry after 306.771815ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0509 08:55:50.269309  195809 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220509085441-6723
	W0509 08:55:50.314402  195809 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220509085441-6723 returned with exit code 1
	I0509 08:55:50.314510  195809 retry.go:31] will retry after 545.000538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0509 08:55:50.860350  195809 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220509085441-6723
	W0509 08:55:50.900331  195809 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220509085441-6723 returned with exit code 1
	I0509 08:55:50.900466  195809 retry.go:31] will retry after 660.685065ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0509 08:55:51.561867  195809 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220509085441-6723
	W0509 08:55:51.597102  195809 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220509085441-6723 returned with exit code 1
	W0509 08:55:51.597259  195809 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0509 08:55:51.597283  195809 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0509 08:55:51.597299  195809 fix.go:57] fixHost completed within 6.364361969s
	I0509 08:55:51.597314  195809 start.go:81] releasing machines lock for "kubernetes-upgrade-20220509085441-6723", held for 6.364396864s
	W0509 08:55:51.597584  195809 out.go:239] * Failed to start docker container. Running "minikube delete -p kubernetes-upgrade-20220509085441-6723" may fix it: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	* Failed to start docker container. Running "minikube delete -p kubernetes-upgrade-20220509085441-6723" may fix it: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	I0509 08:55:51.600201  195809 out.go:177] 
	W0509 08:55:51.601977  195809 out.go:239] X Exiting due to GUEST_PROVISION_CONTAINER_EXITED: Docker container exited prematurely after it was created, consider investigating Docker's performance/health.
	X Exiting due to GUEST_PROVISION_CONTAINER_EXITED: Docker container exited prematurely after it was created, consider investigating Docker's performance/health.
	I0509 08:55:51.603713  195809 out.go:177] 

                                                
                                                
** /stderr **
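Before giving up, the stderr log above shows retry.go re-probing the container's SSH port mapping with growing delays (200ms, 380ms, 738ms for the first command, then a fresh 220ms, 306ms, 545ms, 660ms series for the second) until fixHost abandons the host after ~6.4s. A minimal Go sketch of that retry-with-backoff pattern, with illustrative constants and jitter rather than minikube's actual tuning:

// retry_sketch.go: re-run a probe with roughly doubling, jittered delays
// until it succeeds or the time budget is spent.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retry(budget time.Duration, fn func() error) error {
	delay := 200 * time.Millisecond
	deadline := time.Now().Add(budget)
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return err
		}
		// Jitter the delay, then roughly double it, as the observed
		// sequence (200ms, 380ms, 738ms, ...) suggests.
		d := delay + time.Duration(rand.Int63n(int64(delay/4)))
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
		delay *= 2
	}
}

func main() {
	attempts := 0
	err := retry(3*time.Second, func() error {
		attempts++
		if attempts < 4 {
			return errors.New("unable to inspect a not running container to get SSH port")
		}
		return nil
	})
	fmt.Println("result:", err)
}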
version_upgrade_test.go:252: failed to upgrade with newest k8s version. args: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220509085441-6723 --memory=2200 --kubernetes-version=v1.24.1-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker : exit status 80
version_upgrade_test.go:255: (dbg) Run:  kubectl --context kubernetes-upgrade-20220509085441-6723 version --output=json
version_upgrade_test.go:255: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-20220509085441-6723 version --output=json: exit status 1 (58.754877ms)

                                                
                                                
** stderr ** 
	error: context "kubernetes-upgrade-20220509085441-6723" does not exist

                                                
                                                
** /stderr **
version_upgrade_test.go:257: error running kubectl: exit status 1
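The kubectl failure above is the expected follow-on: the profile never came up, so no kubeconfig context was written. An illustrative pre-flight check (not the test's code) that a context exists before invoking kubectl:

// context_check.go: list kubeconfig contexts by name and test membership.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func contextExists(name string) (bool, error) {
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false, err
	}
	for _, ctx := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if ctx == name {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := contextExists("kubernetes-upgrade-20220509085441-6723")
	fmt.Println(ok, err)
}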
panic.go:482: *** TestKubernetesUpgrade FAILED at 2022-05-09 08:55:51.743109186 +0000 UTC m=+1877.186689802
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-20220509085441-6723
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-20220509085441-6723:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55",
	        "Created": "2022-05-09T08:54:48.053286897Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 130,
	            "Error": "network ab3dc0e987463e0d1919f0f30610dba9f36f6956dd59c0979fddc98d5b653e0f not found",
	            "StartedAt": "2022-05-09T08:54:48.472807369Z",
	            "FinishedAt": "2022-05-09T08:55:34.933426699Z"
	        },
	        "Image": "sha256:44d43b69f3d5ba7f801dca891b535f23f9839671e82277938ec7dc42a22c50d6",
	        "ResolvConfPath": "/var/lib/docker/containers/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55/hostname",
	        "HostsPath": "/var/lib/docker/containers/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55/hosts",
	        "LogPath": "/var/lib/docker/containers/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55/9eeef2af4bd83a3b8abf5b3e39a735900dfb44e46340fb8cb4634c5469472a55-json.log",
	        "Name": "/kubernetes-upgrade-20220509085441-6723",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "kubernetes-upgrade-20220509085441-6723:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-20220509085441-6723",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4c80c52a664c1dfd1bf7928b1d2ccd53938fa6eb9de02e184ba37e8eb872a866-init/diff:/var/lib/docker/overlay2/beaaca4c58fe6ff4bdb88567c3d78ab7a23955eafaa5637df03ee2e0482d2aa6/diff:/var/lib/docker/overlay2/7c16b810bbfa3a2abff75078fa37b4cba0b2f101ff43d49beaabc3fd2602b1c9/diff:/var/lib/docker/overlay2/60f04c0e4baa8ad1c02ae5e34e6f505db0d740e2d7dc0833b2ff3b8037c1a9b6/diff:/var/lib/docker/overlay2/a12543300ae4ff803b2f0493a60a04a921312ec5a7b6ed493e66acadf998daef/diff:/var/lib/docker/overlay2/2d68f658a64cd8b7255bce93547db5d1b20b119ef6da8b9ce06614134661f235/diff:/var/lib/docker/overlay2/0f968f210c1565f6e8c4e444c650502c06a120d8767c9fefb7b6b2a09f4af83b/diff:/var/lib/docker/overlay2/987a67893acdccd514357a50db6c11680ae1899ec07a36085367a241ba1f0545/diff:/var/lib/docker/overlay2/d5446d1adc007f4390aa6173ef257b0bf52a9c9cc6533f16dd6c577fde5334b3/diff:/var/lib/docker/overlay2/265dd0cde77d578f6412ad48596c75c3548b5e077059c0fa835e4f22775fab83/diff:/var/lib/docker/overlay2/e6b98b
08dd64e06639e2a321169f84a89e8d21cea02d673819156eeb7a4747c3/diff:/var/lib/docker/overlay2/acf7117613a840e97b7a1101bfdce3a154a335ee0ffcaf74bd7f1b27b00cdaab/diff:/var/lib/docker/overlay2/9e88fa64aa800dac6e01eddc9cfd525a0dcd906c1adf98de066be87f87a6b52c/diff:/var/lib/docker/overlay2/6c539edead197cd449ca81fd34e3c534a6ca25446a52141d0ae4484aaac05482/diff:/var/lib/docker/overlay2/866719c52f41a692b94f42071d314de55b290912f946f4ec5f3785a7a5ae40df/diff:/var/lib/docker/overlay2/1c488a9b3bad141652873f4085ce1a466e1f1e8ccbd086d03a492b45179a6064/diff:/var/lib/docker/overlay2/bce5b942da09326e5a5c8595077544ced08031b1cd9b22ff8bff0a4458540139/diff:/var/lib/docker/overlay2/ffe34c3764a4eeac791f4672d47389ec3f399f716ab6e5531d7c5e587f3ace00/diff:/var/lib/docker/overlay2/a377e0779d03467b09a26898ae35b12f325d62984304817f49c693b2146c08f4/diff:/var/lib/docker/overlay2/6092380f07b29488cf0a30cb486638d86eaa8e00ff356a7985c6ac6f2fae5c1d/diff:/var/lib/docker/overlay2/1bc014cc0cf0a91c61f131c06b0194709f853dec23defe9233cbd9cc40030c28/diff:/var/lib/d
ocker/overlay2/0c81c4db7384c48318a800165af7f811348a8081efdd9dbd912e05e55c9eb4e0/diff:/var/lib/docker/overlay2/72b0c515d90bc71e27b766a9be89e315777a5bbe643d8fc508a9ae12557a58ce/diff:/var/lib/docker/overlay2/a4d193bf8c377d4cf1357b9261d3c54d995f17bda3db5abee3ef5caec001d75e/diff:/var/lib/docker/overlay2/763c6d291074b0842ecdeec1f3842fd6a0af0cb86839c82bc38cec0f40d095ca/diff:/var/lib/docker/overlay2/e4156eb2a94ac7136eb674fb8c22ea7f6dff50cd81e4857119d81112dc5ad99d/diff:/var/lib/docker/overlay2/be7effa3bb906b8c48aefab3cc72e657931bccd42c35b03bb52c679c37c70d25/diff:/var/lib/docker/overlay2/ccf8c1a68774cc6129c43490c675e6ba0ff0c88284ad899c9efb4b7492e92a06/diff:/var/lib/docker/overlay2/55f74cddb5c8f2da1131ddd67f7f2a20b8c2be8719b71047d5185fdb4722627d/diff:/var/lib/docker/overlay2/f6c38e83545e9de87d03b1e8e9b0239079acf47784afc11291b2172a11f8296d/diff:/var/lib/docker/overlay2/24604ce83180fcb6b9e1adbaf37db6776e4949ca5e1f4f9050b8fb8dc87d7591/diff:/var/lib/docker/overlay2/b03b1952bea32e53ce88b0d92d09f78ca76ceca0a146b088628be224813
ae87c/diff:/var/lib/docker/overlay2/d7ee02a5ba355c246c3107f286c4130457358aa56e0d8feb3850d953812fe76d/diff:/var/lib/docker/overlay2/1223ae0e5105ae89c78524490f0ccc5fe6dfa373e175fa70474b06c62aad91c7/diff:/var/lib/docker/overlay2/c3209d71eed94ed66b11ba3a7573c347d5c09a5a4fedb00418e52571abb2b5f9/diff:/var/lib/docker/overlay2/d6d32632e36023dec15e643fd41f77168442327cecdead87a47e70683cb0c660/diff:/var/lib/docker/overlay2/fd413b21f1f34027c98a1b4106f7e7db83e910860edd53ef838ac699659cd451/diff:/var/lib/docker/overlay2/9768e3244c157a7a12b8ec31507c1abc0fc806a9f325099262cfeb41e23a5fe1/diff:/var/lib/docker/overlay2/f65dda9d5b79903f5e68f6b9d4a59214d62f304da77faa7e148b90b402d98ca4/diff:/var/lib/docker/overlay2/b663c2dab6b5df7b606daa62e1df5e57f4a2503c2a9a19211f47359fd685dccf/diff:/var/lib/docker/overlay2/bbd620a7e494844db80a2bcd2fd6f170080c5273f1c33a576501214dd4475464/diff:/var/lib/docker/overlay2/23de36667695cec8ea0f4c98fc580213d356d39a3eba5aaab8b6ebeeb2b71596/diff:/var/lib/docker/overlay2/bafb3e8d91e83dcf824b6d5b3f56a67c483c42
723724b009a5aedf578351154f/diff:/var/lib/docker/overlay2/2798e8ce2f51dde257a9cf2dd800492f888fd02515c5f1de4133cc787ee12928/diff:/var/lib/docker/overlay2/e69a88f2dffdd2c7f72c45eaf2e3cd1a8772e0a2af22d6f34d2695bbda62b6e9/diff:/var/lib/docker/overlay2/708816f20892c0e4b94cbe8e80b169fff54ffd870bcde395bd8163dd03b25d0f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4c80c52a664c1dfd1bf7928b1d2ccd53938fa6eb9de02e184ba37e8eb872a866/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4c80c52a664c1dfd1bf7928b1d2ccd53938fa6eb9de02e184ba37e8eb872a866/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4c80c52a664c1dfd1bf7928b1d2ccd53938fa6eb9de02e184ba37e8eb872a866/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-20220509085441-6723",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-20220509085441-6723/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-20220509085441-6723",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-20220509085441-6723",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-20220509085441-6723",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4d9bdd6efabd7d19b0c4b1317d302d4bdff4657c9fa9a1027917f2a96a3ba380",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {},
	            "SandboxKey": "/var/run/docker/netns/4d9bdd6efabd",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-20220509085441-6723": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "9eeef2af4bd8",
	                        "kubernetes-upgrade-20220509085441-6723"
	                    ],
	                    "NetworkID": "ab3dc0e987463e0d1919f0f30610dba9f36f6956dd59c0979fddc98d5b653e0f",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-20220509085441-6723 -n kubernetes-upgrade-20220509085441-6723
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-20220509085441-6723 -n kubernetes-upgrade-20220509085441-6723: exit status 7 (101.215208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-20220509085441-6723" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-20220509085441-6723" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-20220509085441-6723
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-20220509085441-6723: (1.827510971s)
--- FAIL: TestKubernetesUpgrade (71.94s)
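
The inspect output above shows the node container exited with code 130 because the Docker network it was attached to had already been removed (the State.Error field reports the network as not found). A minimal sketch for confirming that failure mode by hand, assuming shell access to the CI host while the container still exists (names are taken from this run):

    # Summarize just the container state instead of reading the full inspect JSON
    docker inspect kubernetes-upgrade-20220509085441-6723 \
      --format 'status={{.State.Status}} exit={{.State.ExitCode}} err={{.State.Error}}'

    # Check whether the network the container references is still present
    docker network ls --filter name=kubernetes-upgrade-20220509085441-6723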

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context no-preload-20220509085724-6723 create -f testdata/busybox.yaml
start_stop_delete_test.go:198: (dbg) Non-zero exit: kubectl --context no-preload-20220509085724-6723 create -f testdata/busybox.yaml: exit status 1 (40.901308ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-20220509085724-6723" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:198: kubectl --context no-preload-20220509085724-6723 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220509085724-6723
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220509085724-6723: exit status 1 (43.107257ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: no-preload-20220509085724-6723

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220509085724-6723 -n no-preload-20220509085724-6723
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220509085724-6723 -n no-preload-20220509085724-6723: exit status 85 (72.322626ms)

                                                
                                                
-- stdout --
	* Profile "no-preload-20220509085724-6723" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p no-preload-20220509085724-6723"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "no-preload-20220509085724-6723" host is not running, skipping log retrieval (state="* Profile \"no-preload-20220509085724-6723\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p no-preload-20220509085724-6723\"")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220509085724-6723
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220509085724-6723: exit status 1 (45.630189ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: no-preload-20220509085724-6723

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220509085724-6723 -n no-preload-20220509085724-6723
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220509085724-6723 -n no-preload-20220509085724-6723: exit status 85 (68.344414ms)

                                                
                                                
-- stdout --
	* Profile "no-preload-20220509085724-6723" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p no-preload-20220509085724-6723"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "no-preload-20220509085724-6723" host is not running, skipping log retrieval (state="* Profile \"no-preload-20220509085724-6723\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p no-preload-20220509085724-6723\"")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.27s)
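
Every step in this subtest fails with the same root cause: the cluster was never created, so kubectl has no context for the profile. A minimal sketch for verifying that by hand (the profile name comes from this run):

    # List the contexts kubectl actually knows about
    kubectl config get-contexts -o name

    # Check whether minikube still tracks the profile at all
    out/minikube-linux-amd64 profile list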

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-20220509085724-6723 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:207: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-20220509085724-6723 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (70.477059ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: loading profile: cluster "no-preload-20220509085724-6723" does not exist
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:209: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-20220509085724-6723 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context no-preload-20220509085724-6723 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:217: (dbg) Non-zero exit: kubectl --context no-preload-20220509085724-6723 describe deploy/metrics-server -n kube-system: exit status 1 (39.543967ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-20220509085724-6723" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:219: failed to get info on auto-pause deployments. args "kubectl --context no-preload-20220509085724-6723 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:223: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220509085724-6723
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220509085724-6723: exit status 1 (44.207544ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: no-preload-20220509085724-6723

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220509085724-6723 -n no-preload-20220509085724-6723
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220509085724-6723 -n no-preload-20220509085724-6723: exit status 85 (67.988919ms)

                                                
                                                
-- stdout --
	* Profile "no-preload-20220509085724-6723" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p no-preload-20220509085724-6723"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "no-preload-20220509085724-6723" host is not running, skipping log retrieval (state="* Profile \"no-preload-20220509085724-6723\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p no-preload-20220509085724-6723\"")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.22s)
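
"addons enable" exits with status 10 (MK_ADDON_ENABLE) here because the profile's cluster does not exist. A minimal sketch of a guard a caller could apply first, reusing only commands from this run; "minikube status" exits non-zero when the profile is missing or stopped, as seen elsewhere in this report:

    if out/minikube-linux-amd64 status -p no-preload-20220509085724-6723 >/dev/null 2>&1; then
      out/minikube-linux-amd64 addons enable metrics-server -p no-preload-20220509085724-6723
    fi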

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-20220509085724-6723 --alsologtostderr -v=3
start_stop_delete_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-20220509085724-6723 --alsologtostderr -v=3: exit status 85 (68.072732ms)

                                                
                                                
-- stdout --
	* Profile "no-preload-20220509085724-6723" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p no-preload-20220509085724-6723"

                                                
                                                
-- /stdout --
** stderr ** 
	I0509 08:57:25.535346  228926 out.go:296] Setting OutFile to fd 1 ...
	I0509 08:57:25.535478  228926 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0509 08:57:25.535489  228926 out.go:309] Setting ErrFile to fd 2...
	I0509 08:57:25.535494  228926 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0509 08:57:25.535632  228926 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/bin
	I0509 08:57:25.535847  228926 out.go:303] Setting JSON to false
	I0509 08:57:25.535884  228926 mustload.go:65] Loading cluster: no-preload-20220509085724-6723
	I0509 08:57:25.538515  228926 out.go:177] * Profile "no-preload-20220509085724-6723" not found. Run "minikube profile list" to view all profiles.
	I0509 08:57:25.540384  228926 out.go:177]   To start a cluster, run: "minikube start -p no-preload-20220509085724-6723"

                                                
                                                
** /stderr **
start_stop_delete_test.go:232: failed stopping minikube - first stop. args "out/minikube-linux-amd64 stop -p no-preload-20220509085724-6723 --alsologtostderr -v=3": exit status 85
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220509085724-6723
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220509085724-6723: exit status 1 (43.899048ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: no-preload-20220509085724-6723

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220509085724-6723 -n no-preload-20220509085724-6723
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220509085724-6723 -n no-preload-20220509085724-6723: exit status 85 (68.088404ms)

                                                
                                                
-- stdout --
	* Profile "no-preload-20220509085724-6723" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p no-preload-20220509085724-6723"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "no-preload-20220509085724-6723" host is not running, skipping log retrieval (state="* Profile \"no-preload-20220509085724-6723\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p no-preload-20220509085724-6723\"")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (0.18s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220509085724-6723 -n no-preload-20220509085724-6723
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220509085724-6723 -n no-preload-20220509085724-6723: exit status 85 (66.340657ms)

                                                
                                                
-- stdout --
	* Profile "no-preload-20220509085724-6723" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p no-preload-20220509085724-6723"

                                                
                                                
-- /stdout --
start_stop_delete_test.go:241: status error: exit status 85 (may be ok)
start_stop_delete_test.go:243: expected post-stop host status to be -"Stopped"- but got *"* Profile \"no-preload-20220509085724-6723\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p no-preload-20220509085724-6723\""*
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-20220509085724-6723 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-20220509085724-6723 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: exit status 10 (72.722531ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: loading profile: cluster "no-preload-20220509085724-6723" does not exist
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:250: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-20220509085724-6723 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4": exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220509085724-6723
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220509085724-6723: exit status 1 (44.646528ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: no-preload-20220509085724-6723

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220509085724-6723 -n no-preload-20220509085724-6723
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220509085724-6723 -n no-preload-20220509085724-6723: exit status 85 (69.023522ms)

                                                
                                                
-- stdout --
	* Profile "no-preload-20220509085724-6723" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p no-preload-20220509085724-6723"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "no-preload-20220509085724-6723" host is not running, skipping log retrieval (state="* Profile \"no-preload-20220509085724-6723\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p no-preload-20220509085724-6723\"")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.25s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-linux-amd64
start_stop_delete_test.go:264: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220509085724-6723 -n no-preload-20220509085724-6723
start_stop_delete_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220509085724-6723 -n no-preload-20220509085724-6723: exit status 85 (68.133551ms)

                                                
                                                
-- stdout --
	* Profile "no-preload-20220509085724-6723" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p no-preload-20220509085724-6723"

                                                
                                                
-- /stdout --
start_stop_delete_test.go:264: status error: exit status 85 (may be ok)
start_stop_delete_test.go:266: expected host status after start-stop-start to be -"Running"- but got *"* Profile \"no-preload-20220509085724-6723\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p no-preload-20220509085724-6723\""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220509085724-6723
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220509085724-6723: exit status 1 (43.866123ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: no-preload-20220509085724-6723

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220509085724-6723 -n no-preload-20220509085724-6723
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220509085724-6723 -n no-preload-20220509085724-6723: exit status 85 (71.813668ms)

                                                
                                                
-- stdout --
	* Profile "no-preload-20220509085724-6723" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p no-preload-20220509085724-6723"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "no-preload-20220509085724-6723" host is not running, skipping log retrieval (state="* Profile \"no-preload-20220509085724-6723\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p no-preload-20220509085724-6723\"")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (0.25s)
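
The helper distinguishes failure modes by exit code: status 7 means the host exists but is "Stopped" (seen in TestKubernetesUpgrade above), while status 85 means the profile was never created. A minimal sketch for capturing the code directly, reusing the command from this run:

    out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220509085724-6723
    echo "status exit code: $?"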

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:277: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-20220509085724-6723" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220509085724-6723
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220509085724-6723: exit status 1 (46.008858ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: no-preload-20220509085724-6723

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220509085724-6723 -n no-preload-20220509085724-6723
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220509085724-6723 -n no-preload-20220509085724-6723: exit status 85 (69.596402ms)

                                                
                                                
-- stdout --
	* Profile "no-preload-20220509085724-6723" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p no-preload-20220509085724-6723"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "no-preload-20220509085724-6723" host is not running, skipping log retrieval (state="* Profile \"no-preload-20220509085724-6723\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p no-preload-20220509085724-6723\"")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.12s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:290: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-20220509085724-6723" does not exist
start_stop_delete_test.go:293: (dbg) Run:  kubectl --context no-preload-20220509085724-6723 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:293: (dbg) Non-zero exit: kubectl --context no-preload-20220509085724-6723 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (46.979521ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-20220509085724-6723" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:295: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-20220509085724-6723 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:299: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220509085724-6723
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220509085724-6723: exit status 1 (46.102596ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: no-preload-20220509085724-6723

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220509085724-6723 -n no-preload-20220509085724-6723
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220509085724-6723 -n no-preload-20220509085724-6723: exit status 85 (70.69359ms)

                                                
                                                
-- stdout --
	* Profile "no-preload-20220509085724-6723" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p no-preload-20220509085724-6723"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "no-preload-20220509085724-6723" host is not running, skipping log retrieval (state="* Profile \"no-preload-20220509085724-6723\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p no-preload-20220509085724-6723\"")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.16s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-20220509085724-6723 "sudo crictl images -o json"
start_stop_delete_test.go:306: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p no-preload-20220509085724-6723 "sudo crictl images -o json": exit status 85 (67.381998ms)

                                                
                                                
-- stdout --
	* Profile "no-preload-20220509085724-6723" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p no-preload-20220509085724-6723"

                                                
                                                
-- /stdout --
start_stop_delete_test.go:306: failed to get images inside minikube. args "out/minikube-linux-amd64 ssh -p no-preload-20220509085724-6723 \"sudo crictl images -o json\"": exit status 85
start_stop_delete_test.go:306: failed to decode images json: invalid character '*' looking for beginning of value. output:
* Profile "no-preload-20220509085724-6723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p no-preload-20220509085724-6723"
start_stop_delete_test.go:306: v1.24.1-rc.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/coredns/coredns:v1.8.6",
- 	"k8s.gcr.io/etcd:3.5.3-0",
- 	"k8s.gcr.io/kube-apiserver:v1.24.1-rc.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.24.1-rc.0",
- 	"k8s.gcr.io/kube-proxy:v1.24.1-rc.0",
- 	"k8s.gcr.io/kube-scheduler:v1.24.1-rc.0",
- 	"k8s.gcr.io/pause:3.7",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220509085724-6723
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220509085724-6723: exit status 1 (43.771502ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: no-preload-20220509085724-6723

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220509085724-6723 -n no-preload-20220509085724-6723

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/VerifyKubernetesImages
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220509085724-6723 -n no-preload-20220509085724-6723: exit status 85 (79.624701ms)

                                                
                                                
-- stdout --
	* Profile "no-preload-20220509085724-6723" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p no-preload-20220509085724-6723"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "no-preload-20220509085724-6723" host is not running, skipping log retrieval (state="* Profile \"no-preload-20220509085724-6723\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p no-preload-20220509085724-6723\"")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.19s)
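
The image check runs "sudo crictl images -o json" through the minikube ssh helper and decodes the result; here it received the "Profile not found" banner instead of JSON, hence the "invalid character '*'" decode error. A minimal sketch of what the test expects to parse on a healthy cluster, assuming jq is available on the host:

    # crictl's JSON image list, reduced to the repo tags the test compares against
    out/minikube-linux-amd64 ssh -p no-preload-20220509085724-6723 "sudo crictl images -o json" \
      | jq -r '.images[].repoTags[]'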

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-20220509085724-6723 --alsologtostderr -v=1

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-20220509085724-6723 --alsologtostderr -v=1: exit status 85 (73.198224ms)

                                                
                                                
-- stdout --
	* Profile "no-preload-20220509085724-6723" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p no-preload-20220509085724-6723"

                                                
                                                
-- /stdout --
** stderr ** 
	I0509 08:57:26.696837  229235 out.go:296] Setting OutFile to fd 1 ...
	I0509 08:57:26.696984  229235 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0509 08:57:26.696996  229235 out.go:309] Setting ErrFile to fd 2...
	I0509 08:57:26.697002  229235 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0509 08:57:26.697136  229235 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/bin
	I0509 08:57:26.697320  229235 out.go:303] Setting JSON to false
	I0509 08:57:26.697340  229235 mustload.go:65] Loading cluster: no-preload-20220509085724-6723
	I0509 08:57:26.699868  229235 out.go:177] * Profile "no-preload-20220509085724-6723" not found. Run "minikube profile list" to view all profiles.
	I0509 08:57:26.701615  229235 out.go:177]   To start a cluster, run: "minikube start -p no-preload-20220509085724-6723"

                                                
                                                
** /stderr **
start_stop_delete_test.go:313: out/minikube-linux-amd64 pause -p no-preload-20220509085724-6723 --alsologtostderr -v=1 failed: exit status 85
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220509085724-6723
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220509085724-6723: exit status 1 (44.806072ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: no-preload-20220509085724-6723

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220509085724-6723 -n no-preload-20220509085724-6723
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220509085724-6723 -n no-preload-20220509085724-6723: exit status 85 (70.359049ms)

                                                
                                                
-- stdout --
	* Profile "no-preload-20220509085724-6723" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p no-preload-20220509085724-6723"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "no-preload-20220509085724-6723" host is not running, skipping log retrieval (state="* Profile \"no-preload-20220509085724-6723\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p no-preload-20220509085724-6723\"")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220509085724-6723
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20220509085724-6723: exit status 1 (45.075448ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: no-preload-20220509085724-6723

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220509085724-6723 -n no-preload-20220509085724-6723
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220509085724-6723 -n no-preload-20220509085724-6723: exit status 85 (69.443116ms)

                                                
                                                
-- stdout --
	* Profile "no-preload-20220509085724-6723" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p no-preload-20220509085724-6723"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "no-preload-20220509085724-6723" host is not running, skipping log retrieval (state="* Profile \"no-preload-20220509085724-6723\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p no-preload-20220509085724-6723\"")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.30s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (0.31s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context embed-certs-20220509085727-6723 create -f testdata/busybox.yaml
start_stop_delete_test.go:198: (dbg) Non-zero exit: kubectl --context embed-certs-20220509085727-6723 create -f testdata/busybox.yaml: exit status 1 (46.523775ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-20220509085727-6723" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:198: kubectl --context embed-certs-20220509085727-6723 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220509085727-6723
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220509085727-6723: exit status 1 (53.659392ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: embed-certs-20220509085727-6723

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220509085727-6723 -n embed-certs-20220509085727-6723
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220509085727-6723 -n embed-certs-20220509085727-6723: exit status 85 (81.400385ms)

                                                
                                                
-- stdout --
	* Profile "embed-certs-20220509085727-6723" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p embed-certs-20220509085727-6723"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "embed-certs-20220509085727-6723" host is not running, skipping log retrieval (state="* Profile \"embed-certs-20220509085727-6723\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p embed-certs-20220509085727-6723\"")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220509085727-6723

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/DeployApp
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220509085727-6723: exit status 1 (51.60948ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: embed-certs-20220509085727-6723

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220509085727-6723 -n embed-certs-20220509085727-6723

=== CONT  TestStartStop/group/embed-certs/serial/DeployApp
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220509085727-6723 -n embed-certs-20220509085727-6723: exit status 85 (73.49308ms)

-- stdout --
	* Profile "embed-certs-20220509085727-6723" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p embed-certs-20220509085727-6723"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "embed-certs-20220509085727-6723" host is not running, skipping log retrieval (state="* Profile \"embed-certs-20220509085727-6723\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p embed-certs-20220509085727-6723\"")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.31s)
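All of the DeployApp failures above reduce to one root cause: the cluster was never created, so the kubectl context named after the profile does not exist and the first "(dbg) Run" step exits 1 before any deployment is attempted. A minimal stand-alone sketch of that step in Go (the profile name and manifest path are taken from the log above; the wrapper itself is hypothetical, the harness's real one lives in helpers_test.go):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Profile name from the failing test; kubectl exits 1 when the
	// named context is absent from the kubeconfig.
	profile := "embed-certs-20220509085727-6723"
	cmd := exec.Command("kubectl", "--context", profile,
		"create", "-f", "testdata/busybox.yaml")
	out, err := cmd.CombinedOutput()
	if err != nil {
		// Reproduces the log line: error: context "..." does not exist
		fmt.Printf("non-zero exit: %v\n%s", err, out)
	}
}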

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.29s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-20220509085727-6723 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:207: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-20220509085727-6723 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (112.513451ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: loading profile: cluster "embed-certs-20220509085727-6723" does not exist
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:209: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-20220509085727-6723 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context embed-certs-20220509085727-6723 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:217: (dbg) Non-zero exit: kubectl --context embed-certs-20220509085727-6723 describe deploy/metrics-server -n kube-system: exit status 1 (42.997002ms)

** stderr ** 
	error: context "embed-certs-20220509085727-6723" does not exist

** /stderr **
start_stop_delete_test.go:219: failed to get info on metrics-server deployments. args "kubectl --context embed-certs-20220509085727-6723 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:223: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220509085727-6723
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220509085727-6723: exit status 1 (58.516593ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: embed-certs-20220509085727-6723

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220509085727-6723 -n embed-certs-20220509085727-6723
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220509085727-6723 -n embed-certs-20220509085727-6723: exit status 85 (74.139155ms)

-- stdout --
	* Profile "embed-certs-20220509085727-6723" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p embed-certs-20220509085727-6723"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "embed-certs-20220509085727-6723" host is not running, skipping log retrieval (state="* Profile \"embed-certs-20220509085727-6723\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p embed-certs-20220509085727-6723\"")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.29s)

TestStartStop/group/embed-certs/serial/Stop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-20220509085727-6723 --alsologtostderr -v=3
start_stop_delete_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-20220509085727-6723 --alsologtostderr -v=3: exit status 85 (76.538728ms)

-- stdout --
	* Profile "embed-certs-20220509085727-6723" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p embed-certs-20220509085727-6723"

-- /stdout --
** stderr ** 
	I0509 08:57:27.836323  229767 out.go:296] Setting OutFile to fd 1 ...
	I0509 08:57:27.836505  229767 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0509 08:57:27.836519  229767 out.go:309] Setting ErrFile to fd 2...
	I0509 08:57:27.836525  229767 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0509 08:57:27.836738  229767 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/bin
	I0509 08:57:27.836993  229767 out.go:303] Setting JSON to false
	I0509 08:57:27.837031  229767 mustload.go:65] Loading cluster: embed-certs-20220509085727-6723
	I0509 08:57:27.839211  229767 out.go:177] * Profile "embed-certs-20220509085727-6723" not found. Run "minikube profile list" to view all profiles.
	I0509 08:57:27.840751  229767 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-20220509085727-6723"

** /stderr **
start_stop_delete_test.go:232: failed stopping minikube - first stop. args "out/minikube-linux-amd64 stop -p embed-certs-20220509085727-6723 --alsologtostderr -v=3": exit status 85
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220509085727-6723

=== CONT  TestStartStop/group/embed-certs/serial/Stop
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220509085727-6723: exit status 1 (46.315586ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: embed-certs-20220509085727-6723

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220509085727-6723 -n embed-certs-20220509085727-6723
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220509085727-6723 -n embed-certs-20220509085727-6723: exit status 85 (74.893024ms)

-- stdout --
	* Profile "embed-certs-20220509085727-6723" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p embed-certs-20220509085727-6723"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "embed-certs-20220509085727-6723" host is not running, skipping log retrieval (state="* Profile \"embed-certs-20220509085727-6723\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p embed-certs-20220509085727-6723\"")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (0.20s)
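The exit status 85 that recurs throughout these sections is minikube's "profile not found" path: the mustload step ("Loading cluster: ..." in the stderr above) fails to find any saved profile, prints the advice text seen in stdout, and exits. A sketch of that check under stated assumptions (the real lookup lives in minikube's config/mustload packages; the $MINIKUBE_HOME/profiles/<name>/config.json layout here is an assumption for illustration):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	profile := "embed-certs-20220509085727-6723"
	// Assumption: profile configs live under $MINIKUBE_HOME/profiles/<name>/config.json.
	cfg := filepath.Join(os.Getenv("MINIKUBE_HOME"), "profiles", profile, "config.json")
	if _, err := os.Stat(cfg); err != nil {
		fmt.Printf("* Profile %q not found. Run \"minikube profile list\" to view all profiles.\n", profile)
		fmt.Printf("  To start a cluster, run: \"minikube start -p %s\"\n", profile)
		os.Exit(85) // the status the helpers report as "may be ok"
	}
}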

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.31s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220509085727-6723 -n embed-certs-20220509085727-6723
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220509085727-6723 -n embed-certs-20220509085727-6723: exit status 85 (77.513596ms)

-- stdout --
	* Profile "embed-certs-20220509085727-6723" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p embed-certs-20220509085727-6723"

-- /stdout --
start_stop_delete_test.go:241: status error: exit status 85 (may be ok)
start_stop_delete_test.go:243: expected post-stop host status to be -"Stopped"- but got *"* Profile \"embed-certs-20220509085727-6723\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p embed-certs-20220509085727-6723\""*
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-20220509085727-6723 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-20220509085727-6723 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: exit status 10 (91.603074ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: loading profile: cluster "embed-certs-20220509085727-6723" does not exist
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:250: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-20220509085727-6723 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4": exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220509085727-6723
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220509085727-6723: exit status 1 (59.411062ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: embed-certs-20220509085727-6723

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220509085727-6723 -n embed-certs-20220509085727-6723
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220509085727-6723 -n embed-certs-20220509085727-6723: exit status 85 (80.207359ms)

-- stdout --
	* Profile "embed-certs-20220509085727-6723" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p embed-certs-20220509085727-6723"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "embed-certs-20220509085727-6723" host is not running, skipping log retrieval (state="* Profile \"embed-certs-20220509085727-6723\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p embed-certs-20220509085727-6723\"")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.31s)

TestStartStop/group/embed-certs/serial/SecondStart (0.3s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-linux-amd64

=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:264: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220509085727-6723 -n embed-certs-20220509085727-6723

=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220509085727-6723 -n embed-certs-20220509085727-6723: exit status 85 (91.265821ms)

-- stdout --
	* Profile "embed-certs-20220509085727-6723" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p embed-certs-20220509085727-6723"

-- /stdout --
start_stop_delete_test.go:264: status error: exit status 85 (may be ok)
start_stop_delete_test.go:266: expected host status after start-stop-start to be -"Running"- but got *"* Profile \"embed-certs-20220509085727-6723\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p embed-certs-20220509085727-6723\""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220509085727-6723

=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220509085727-6723: exit status 1 (45.70847ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: embed-certs-20220509085727-6723

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220509085727-6723 -n embed-certs-20220509085727-6723

=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220509085727-6723 -n embed-certs-20220509085727-6723: exit status 85 (75.395ms)

-- stdout --
	* Profile "embed-certs-20220509085727-6723" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p embed-certs-20220509085727-6723"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "embed-certs-20220509085727-6723" host is not running, skipping log retrieval (state="* Profile \"embed-certs-20220509085727-6723\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p embed-certs-20220509085727-6723\"")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (0.30s)

TestStartStop/group/default-k8s-different-port/serial/DeployApp (0.29s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context default-k8s-different-port-20220509085728-6723 create -f testdata/busybox.yaml

=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220509085728-6723 create -f testdata/busybox.yaml: exit status 1 (46.969288ms)

** stderr ** 
	error: context "default-k8s-different-port-20220509085728-6723" does not exist

** /stderr **
start_stop_delete_test.go:198: kubectl --context default-k8s-different-port-20220509085728-6723 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220509085728-6723

=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220509085728-6723: exit status 1 (46.033725ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: default-k8s-different-port-20220509085728-6723

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220509085728-6723 -n default-k8s-different-port-20220509085728-6723

=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220509085728-6723 -n default-k8s-different-port-20220509085728-6723: exit status 85 (74.613389ms)

-- stdout --
	* Profile "default-k8s-different-port-20220509085728-6723" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p default-k8s-different-port-20220509085728-6723"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220509085728-6723" host is not running, skipping log retrieval (state="* Profile \"default-k8s-different-port-20220509085728-6723\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p default-k8s-different-port-20220509085728-6723\"")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220509085728-6723

=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220509085728-6723: exit status 1 (45.979872ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: default-k8s-different-port-20220509085728-6723

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220509085728-6723 -n default-k8s-different-port-20220509085728-6723

=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220509085728-6723 -n default-k8s-different-port-20220509085728-6723: exit status 85 (75.257966ms)

-- stdout --
	* Profile "default-k8s-different-port-20220509085728-6723" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p default-k8s-different-port-20220509085728-6723"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220509085728-6723" host is not running, skipping log retrieval (state="* Profile \"default-k8s-different-port-20220509085728-6723\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p default-k8s-different-port-20220509085728-6723\"")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/DeployApp (0.29s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.13s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:277: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-20220509085727-6723" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220509085727-6723

=== CONT  TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220509085727-6723: exit status 1 (46.627704ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: embed-certs-20220509085727-6723

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220509085727-6723 -n embed-certs-20220509085727-6723

=== CONT  TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220509085727-6723 -n embed-certs-20220509085727-6723: exit status 85 (77.789416ms)

-- stdout --
	* Profile "embed-certs-20220509085727-6723" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p embed-certs-20220509085727-6723"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "embed-certs-20220509085727-6723" host is not running, skipping log retrieval (state="* Profile \"embed-certs-20220509085727-6723\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p embed-certs-20220509085727-6723\"")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.13s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.17s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:290: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-20220509085727-6723" does not exist
start_stop_delete_test.go:293: (dbg) Run:  kubectl --context embed-certs-20220509085727-6723 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard

=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:293: (dbg) Non-zero exit: kubectl --context embed-certs-20220509085727-6723 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (44.730169ms)

** stderr ** 
	error: context "embed-certs-20220509085727-6723" does not exist

** /stderr **
start_stop_delete_test.go:295: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-20220509085727-6723 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:299: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220509085727-6723

=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220509085727-6723: exit status 1 (48.610014ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: embed-certs-20220509085727-6723

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220509085727-6723 -n embed-certs-20220509085727-6723

=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220509085727-6723 -n embed-certs-20220509085727-6723: exit status 85 (70.719956ms)

-- stdout --
	* Profile "embed-certs-20220509085727-6723" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p embed-certs-20220509085727-6723"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "embed-certs-20220509085727-6723" host is not running, skipping log retrieval (state="* Profile \"embed-certs-20220509085727-6723\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p embed-certs-20220509085727-6723\"")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.17s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.26s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-different-port-20220509085728-6723 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain

=== CONT  TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-different-port-20220509085728-6723 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (82.196097ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: loading profile: cluster "default-k8s-different-port-20220509085728-6723" does not exist
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:209: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-different-port-20220509085728-6723 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context default-k8s-different-port-20220509085728-6723 describe deploy/metrics-server -n kube-system

=== CONT  TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:217: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220509085728-6723 describe deploy/metrics-server -n kube-system: exit status 1 (46.528701ms)

** stderr ** 
	error: context "default-k8s-different-port-20220509085728-6723" does not exist

** /stderr **
start_stop_delete_test.go:219: failed to get info on metrics-server deployments. args "kubectl --context default-k8s-different-port-20220509085728-6723 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:223: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220509085728-6723

=== CONT  TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220509085728-6723: exit status 1 (48.846638ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: default-k8s-different-port-20220509085728-6723

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220509085728-6723 -n default-k8s-different-port-20220509085728-6723

=== CONT  TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220509085728-6723 -n default-k8s-different-port-20220509085728-6723: exit status 85 (77.662985ms)

-- stdout --
	* Profile "default-k8s-different-port-20220509085728-6723" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p default-k8s-different-port-20220509085728-6723"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220509085728-6723" host is not running, skipping log retrieval (state="* Profile \"default-k8s-different-port-20220509085728-6723\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p default-k8s-different-port-20220509085728-6723\"")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.26s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-20220509085727-6723 "sudo crictl images -o json"

=== CONT  TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p embed-certs-20220509085727-6723 "sudo crictl images -o json": exit status 85 (73.943894ms)

-- stdout --
	* Profile "embed-certs-20220509085727-6723" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p embed-certs-20220509085727-6723"

-- /stdout --
start_stop_delete_test.go:306: failed to get images inside minikube. args "out/minikube-linux-amd64 ssh -p embed-certs-20220509085727-6723 \"sudo crictl images -o json\"": exit status 85
start_stop_delete_test.go:306: failed to decode images json: invalid character '*' looking for beginning of value. output:
* Profile "embed-certs-20220509085727-6723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p embed-certs-20220509085727-6723"
start_stop_delete_test.go:306: v1.24.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/coredns/coredns:v1.8.6",
- 	"k8s.gcr.io/etcd:3.5.3-0",
- 	"k8s.gcr.io/kube-apiserver:v1.24.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.24.0",
- 	"k8s.gcr.io/kube-proxy:v1.24.0",
- 	"k8s.gcr.io/kube-scheduler:v1.24.0",
- 	"k8s.gcr.io/pause:3.7",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220509085727-6723

=== CONT  TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220509085727-6723: exit status 1 (51.914124ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: embed-certs-20220509085727-6723

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220509085727-6723 -n embed-certs-20220509085727-6723

=== CONT  TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220509085727-6723 -n embed-certs-20220509085727-6723: exit status 85 (80.667442ms)

-- stdout --
	* Profile "embed-certs-20220509085727-6723" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p embed-certs-20220509085727-6723"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "embed-certs-20220509085727-6723" host is not running, skipping log retrieval (state="* Profile \"embed-certs-20220509085727-6723\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p embed-certs-20220509085727-6723\"")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.21s)
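The "invalid character '*'" error above is a knock-on effect rather than an image problem: the test expects "sudo crictl images -o json" to return JSON, but because the ssh command never reached a node, stdout is the "* Profile ... not found" advice text and the JSON decoder stops at the leading '*'. A sketch of the decode step with a deliberately minimal, hypothetical schema (the real CRI output carries more fields):

package main

import (
	"encoding/json"
	"fmt"
)

// imageList is a stand-in for crictl's JSON output; only repo tags matter
// for the want/got comparison shown in the log.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	// What the test actually received on stdout instead of JSON:
	stdout := []byte(`* Profile "embed-certs-20220509085727-6723" not found.`)
	var imgs imageList
	if err := json.Unmarshal(stdout, &imgs); err != nil {
		// Prints: invalid character '*' looking for beginning of value
		fmt.Println("failed to decode images json:", err)
		return
	}
	// On success, the collected tags would be diffed against the
	// v1.24.0 want-list shown in the log above.
}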

TestStartStop/group/default-k8s-different-port/serial/Stop (0.21s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-different-port-20220509085728-6723 --alsologtostderr -v=3

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-different-port-20220509085728-6723 --alsologtostderr -v=3: exit status 85 (84.169546ms)

-- stdout --
	* Profile "default-k8s-different-port-20220509085728-6723" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p default-k8s-different-port-20220509085728-6723"

-- /stdout --
** stderr ** 
	I0509 08:57:29.041776  230316 out.go:296] Setting OutFile to fd 1 ...
	I0509 08:57:29.041922  230316 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0509 08:57:29.041935  230316 out.go:309] Setting ErrFile to fd 2...
	I0509 08:57:29.041942  230316 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0509 08:57:29.042096  230316 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/bin
	I0509 08:57:29.042330  230316 out.go:303] Setting JSON to false
	I0509 08:57:29.042372  230316 mustload.go:65] Loading cluster: default-k8s-different-port-20220509085728-6723
	I0509 08:57:29.044955  230316 out.go:177] * Profile "default-k8s-different-port-20220509085728-6723" not found. Run "minikube profile list" to view all profiles.
	I0509 08:57:29.046562  230316 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-different-port-20220509085728-6723"

** /stderr **
start_stop_delete_test.go:232: failed stopping minikube - first stop. args "out/minikube-linux-amd64 stop -p default-k8s-different-port-20220509085728-6723 --alsologtostderr -v=3": exit status 85
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220509085728-6723

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Stop
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220509085728-6723: exit status 1 (47.03868ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: default-k8s-different-port-20220509085728-6723

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220509085728-6723 -n default-k8s-different-port-20220509085728-6723

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Stop
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220509085728-6723 -n default-k8s-different-port-20220509085728-6723: exit status 85 (79.217082ms)

-- stdout --
	* Profile "default-k8s-different-port-20220509085728-6723" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p default-k8s-different-port-20220509085728-6723"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220509085728-6723" host is not running, skipping log retrieval (state="* Profile \"default-k8s-different-port-20220509085728-6723\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p default-k8s-different-port-20220509085728-6723\"")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/Stop (0.21s)

TestStartStop/group/embed-certs/serial/Pause (0.34s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-20220509085727-6723 --alsologtostderr -v=1

=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-20220509085727-6723 --alsologtostderr -v=1: exit status 85 (81.637852ms)

-- stdout --
	* Profile "embed-certs-20220509085727-6723" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p embed-certs-20220509085727-6723"

-- /stdout --
** stderr ** 
	I0509 08:57:29.146343  230359 out.go:296] Setting OutFile to fd 1 ...
	I0509 08:57:29.146522  230359 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0509 08:57:29.146657  230359 out.go:309] Setting ErrFile to fd 2...
	I0509 08:57:29.146671  230359 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0509 08:57:29.146822  230359 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/bin
	I0509 08:57:29.147048  230359 out.go:303] Setting JSON to false
	I0509 08:57:29.147074  230359 mustload.go:65] Loading cluster: embed-certs-20220509085727-6723
	I0509 08:57:29.149598  230359 out.go:177] * Profile "embed-certs-20220509085727-6723" not found. Run "minikube profile list" to view all profiles.
	I0509 08:57:29.152139  230359 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-20220509085727-6723"

** /stderr **
start_stop_delete_test.go:313: out/minikube-linux-amd64 pause -p embed-certs-20220509085727-6723 --alsologtostderr -v=1 failed: exit status 85
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220509085727-6723

=== CONT  TestStartStop/group/embed-certs/serial/Pause
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220509085727-6723: exit status 1 (50.060062ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: embed-certs-20220509085727-6723

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220509085727-6723 -n embed-certs-20220509085727-6723

=== CONT  TestStartStop/group/embed-certs/serial/Pause
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220509085727-6723 -n embed-certs-20220509085727-6723: exit status 85 (79.041141ms)

-- stdout --
	* Profile "embed-certs-20220509085727-6723" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p embed-certs-20220509085727-6723"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "embed-certs-20220509085727-6723" host is not running, skipping log retrieval (state="* Profile \"embed-certs-20220509085727-6723\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p embed-certs-20220509085727-6723\"")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220509085727-6723
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20220509085727-6723: exit status 1 (47.663109ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: embed-certs-20220509085727-6723

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220509085727-6723 -n embed-certs-20220509085727-6723

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/Pause
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220509085727-6723 -n embed-certs-20220509085727-6723: exit status 85 (77.88751ms)

                                                
                                                
-- stdout --
	* Profile "embed-certs-20220509085727-6723" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p embed-certs-20220509085727-6723"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "embed-certs-20220509085727-6723" host is not running, skipping log retrieval (state="* Profile \"embed-certs-20220509085727-6723\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p embed-certs-20220509085727-6723\"")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.34s)
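
Note: the post-mortem above reduces to two probes: docker inspect <profile> at the container level, and minikube status --format={{.Host}} at the profile level. A minimal sketch of that flow, assuming only a docker CLI and a minikube binary on PATH (the profile name is taken from this log):

// postmortem.go — a sketch of the two probes used by helpers_test.go above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "embed-certs-20220509085727-6723"

	// Probe 1: does Docker still have a container for the profile?
	if out, err := exec.Command("docker", "inspect", profile).CombinedOutput(); err != nil {
		// Exits 1 with "No such object" when the container was never created.
		fmt.Printf("docker inspect: %v\n%s", err, out)
	}

	// Probe 2: what does minikube itself think the host state is?
	out, err := exec.Command("minikube", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile).CombinedOutput()
	// Exit status 85 plus the "Profile ... not found" banner (as in the log)
	// means the profile does not exist at all, not that the host is stopped.
	fmt.Printf("status: %s(err: %v)\n", out, err)
}

The profile-not-found banner, rather than a "Stopped" state, is consistent with every subsequent serial step of this group failing the same way.
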
TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.28s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220509085728-6723 -n default-k8s-different-port-20220509085728-6723
=== CONT  TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220509085728-6723 -n default-k8s-different-port-20220509085728-6723: exit status 85 (75.506774ms)
-- stdout --
	* Profile "default-k8s-different-port-20220509085728-6723" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p default-k8s-different-port-20220509085728-6723"
-- /stdout --
start_stop_delete_test.go:241: status error: exit status 85 (may be ok)
start_stop_delete_test.go:243: expected post-stop host status to be -"Stopped"- but got *"* Profile \"default-k8s-different-port-20220509085728-6723\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p default-k8s-different-port-20220509085728-6723\""*
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-different-port-20220509085728-6723 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
=== CONT  TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-different-port-20220509085728-6723 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: exit status 10 (80.268929ms)
-- stdout --
	
	
-- /stdout --
** stderr **
	X Exiting due to MK_ADDON_ENABLE: loading profile: cluster "default-k8s-different-port-20220509085728-6723" does not exist
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
start_stop_delete_test.go:250: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-different-port-20220509085728-6723 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4": exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220509085728-6723
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220509085728-6723: exit status 1 (50.714286ms)
-- stdout --
	[]
-- /stdout --
** stderr **
	Error: No such object: default-k8s-different-port-20220509085728-6723
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220509085728-6723 -n default-k8s-different-port-20220509085728-6723
=== CONT  TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220509085728-6723 -n default-k8s-different-port-20220509085728-6723: exit status 85 (77.04148ms)
-- stdout --
	* Profile "default-k8s-different-port-20220509085728-6723" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p default-k8s-different-port-20220509085728-6723"
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220509085728-6723" host is not running, skipping log retrieval (state="* Profile \"default-k8s-different-port-20220509085728-6723\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p default-k8s-different-port-20220509085728-6723\"")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.28s)
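
Note: MK_ADDON_ENABLE here is a direct consequence of the missing profile. One way to make that precondition explicit is to consult minikube's profile list before calling addons enable; a sketch, under the assumption that `minikube profile list --output json` keeps its current {"valid": [...], "invalid": [...]} shape:

// A sketch: skip `addons enable` when the profile is absent, instead of
// letting it fail with MK_ADDON_ENABLE as in the log above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList models the assumed shape of `minikube profile list --output json`.
type profileList struct {
	Valid []struct {
		Name string `json:"Name"`
	} `json:"valid"`
}

func profileExists(name string) (bool, error) {
	out, err := exec.Command("minikube", "profile", "list", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		return false, err
	}
	for _, p := range pl.Valid {
		if p.Name == name {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	name := "default-k8s-different-port-20220509085728-6723"
	if ok, err := profileExists(name); err != nil || !ok {
		fmt.Printf("profile %q not usable (exists=%v, err=%v); skipping addons enable\n", name, ok, err)
		return
	}
	out, err := exec.Command("minikube", "addons", "enable", "dashboard", "-p", name).CombinedOutput()
	fmt.Printf("%s(err: %v)\n", out, err)
}

In this run the check would report the profile as missing, matching the "cluster ... does not exist" error above.
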
TestStartStop/group/default-k8s-different-port/serial/SecondStart (0.26s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-linux-amd64
start_stop_delete_test.go:264: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220509085728-6723 -n default-k8s-different-port-20220509085728-6723
start_stop_delete_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220509085728-6723 -n default-k8s-different-port-20220509085728-6723: exit status 85 (69.205327ms)
-- stdout --
	* Profile "default-k8s-different-port-20220509085728-6723" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p default-k8s-different-port-20220509085728-6723"
-- /stdout --
start_stop_delete_test.go:264: status error: exit status 85 (may be ok)
start_stop_delete_test.go:266: expected host status after start-stop-start to be -"Running"- but got *"* Profile \"default-k8s-different-port-20220509085728-6723\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p default-k8s-different-port-20220509085728-6723\""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220509085728-6723
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220509085728-6723: exit status 1 (42.928311ms)
-- stdout --
	[]
-- /stdout --
** stderr **
	Error: No such object: default-k8s-different-port-20220509085728-6723
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220509085728-6723 -n default-k8s-different-port-20220509085728-6723
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220509085728-6723 -n default-k8s-different-port-20220509085728-6723: exit status 85 (73.678134ms)
-- stdout --
	* Profile "default-k8s-different-port-20220509085728-6723" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p default-k8s-different-port-20220509085728-6723"
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220509085728-6723" host is not running, skipping log retrieval (state="* Profile \"default-k8s-different-port-20220509085728-6723\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p default-k8s-different-port-20220509085728-6723\"")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/SecondStart (0.26s)
TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (0.12s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:277: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-different-port-20220509085728-6723" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220509085728-6723
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220509085728-6723: exit status 1 (44.416705ms)
-- stdout --
	[]
-- /stdout --
** stderr **
	Error: No such object: default-k8s-different-port-20220509085728-6723
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220509085728-6723 -n default-k8s-different-port-20220509085728-6723
=== CONT  TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220509085728-6723 -n default-k8s-different-port-20220509085728-6723: exit status 85 (73.929698ms)
-- stdout --
	* Profile "default-k8s-different-port-20220509085728-6723" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p default-k8s-different-port-20220509085728-6723"
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220509085728-6723" host is not running, skipping log retrieval (state="* Profile \"default-k8s-different-port-20220509085728-6723\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p default-k8s-different-port-20220509085728-6723\"")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (0.12s)
TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (0.17s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:290: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-different-port-20220509085728-6723" does not exist
start_stop_delete_test.go:293: (dbg) Run:  kubectl --context default-k8s-different-port-20220509085728-6723 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
=== CONT  TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:293: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220509085728-6723 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (40.75543ms)
** stderr **
	error: context "default-k8s-different-port-20220509085728-6723" does not exist
** /stderr **
start_stop_delete_test.go:295: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-different-port-20220509085728-6723 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:299: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220509085728-6723
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220509085728-6723: exit status 1 (51.944145ms)
-- stdout --
	[]
-- /stdout --
** stderr **
	Error: No such object: default-k8s-different-port-20220509085728-6723
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220509085728-6723 -n default-k8s-different-port-20220509085728-6723
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220509085728-6723 -n default-k8s-different-port-20220509085728-6723: exit status 85 (80.047227ms)
-- stdout --
	* Profile "default-k8s-different-port-20220509085728-6723" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p default-k8s-different-port-20220509085728-6723"
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220509085728-6723" host is not running, skipping log retrieval (state="* Profile \"default-k8s-different-port-20220509085728-6723\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p default-k8s-different-port-20220509085728-6723\"")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (0.17s)
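
Note: the image assertion at start_stop_delete_test.go:299 is essentially a substring check over `kubectl describe` output. A sketch of that check, assuming kubectl on PATH, with the context name and expected image string taken from the log:

// A sketch of the AddonExistsAfterStop image check.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	ctx := "default-k8s-different-port-20220509085728-6723"
	out, err := exec.Command("kubectl", "--context", ctx, "describe",
		"deploy/dashboard-metrics-scraper", "-n", "kubernetes-dashboard").CombinedOutput()
	if err != nil {
		// This is the failure mode in the log: the kubeconfig context itself
		// is missing, so there is no deployment info to inspect at all.
		fmt.Printf("describe failed: %v\n%s", err, out)
		return
	}
	if !strings.Contains(string(out), "k8s.gcr.io/echoserver:1.4") {
		fmt.Println("addon did not load the expected image")
	}
}

Because describe fails before producing output, the test then reports an empty "Addon deployment info", as seen above.
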
TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.19s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-different-port-20220509085728-6723 "sudo crictl images -o json"
start_stop_delete_test.go:306: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p default-k8s-different-port-20220509085728-6723 "sudo crictl images -o json": exit status 85 (72.467752ms)
-- stdout --
	* Profile "default-k8s-different-port-20220509085728-6723" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p default-k8s-different-port-20220509085728-6723"
-- /stdout --
start_stop_delete_test.go:306: failed to get images inside minikube. args "out/minikube-linux-amd64 ssh -p default-k8s-different-port-20220509085728-6723 \"sudo crictl images -o json\"": exit status 85
start_stop_delete_test.go:306: failed to decode images json: invalid character '*' looking for beginning of value. output:
* Profile "default-k8s-different-port-20220509085728-6723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p default-k8s-different-port-20220509085728-6723"
start_stop_delete_test.go:306: v1.24.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/coredns/coredns:v1.8.6",
- 	"k8s.gcr.io/etcd:3.5.3-0",
- 	"k8s.gcr.io/kube-apiserver:v1.24.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.24.0",
- 	"k8s.gcr.io/kube-proxy:v1.24.0",
- 	"k8s.gcr.io/kube-scheduler:v1.24.0",
- 	"k8s.gcr.io/pause:3.7",
  }
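
Note: the decode failure above is expected given the input: the "JSON" handed to the decoder is minikube's profile-not-found banner, which begins with '*'. A sketch of the decode-and-diff step behind start_stop_delete_test.go:306, assuming crictl's {"images": [{"repoTags": [...]}]} output shape and using github.com/google/go-cmp, which produces the same (-want +got) format shown above:

// A sketch of decoding `crictl images -o json` and diffing repo tags.
package main

import (
	"encoding/json"
	"fmt"

	"github.com/google/go-cmp/cmp"
)

// crictlImages models the assumed shape of `crictl images -o json`.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	// In the failing run the decoder is fed minikube's banner, not JSON, so
	// Unmarshal fails with: invalid character '*' looking for beginning of value.
	raw := []byte(`* Profile "default-k8s-different-port-20220509085728-6723" not found.`)

	var imgs crictlImages
	if err := json.Unmarshal(raw, &imgs); err != nil {
		fmt.Println("failed to decode images json:", err)
	}
	var got []string
	for _, img := range imgs.Images {
		got = append(got, img.RepoTags...)
	}

	// Abridged expected list from the log above.
	want := []string{
		"gcr.io/k8s-minikube/storage-provisioner:v5",
		"k8s.gcr.io/kube-apiserver:v1.24.0",
		"k8s.gcr.io/pause:3.7",
	}
	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("images missing (-want +got):\n%s", diff)
	}
}

Since nothing decodes, got stays empty and every expected image appears on the -want side of the diff.
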
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220509085728-6723
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220509085728-6723: exit status 1 (46.056979ms)
-- stdout --
	[]
-- /stdout --
** stderr **
	Error: No such object: default-k8s-different-port-20220509085728-6723
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220509085728-6723 -n default-k8s-different-port-20220509085728-6723
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220509085728-6723 -n default-k8s-different-port-20220509085728-6723: exit status 85 (75.270832ms)
-- stdout --
	* Profile "default-k8s-different-port-20220509085728-6723" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p default-k8s-different-port-20220509085728-6723"
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220509085728-6723" host is not running, skipping log retrieval (state="* Profile \"default-k8s-different-port-20220509085728-6723\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p default-k8s-different-port-20220509085728-6723\"")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.19s)
TestStartStop/group/default-k8s-different-port/serial/Pause (0.34s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-different-port-20220509085728-6723 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-different-port-20220509085728-6723 --alsologtostderr -v=1: exit status 85 (73.82792ms)
-- stdout --
	* Profile "default-k8s-different-port-20220509085728-6723" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p default-k8s-different-port-20220509085728-6723"
-- /stdout --
** stderr **
	I0509 08:57:30.275156  230909 out.go:296] Setting OutFile to fd 1 ...
	I0509 08:57:30.275271  230909 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0509 08:57:30.275281  230909 out.go:309] Setting ErrFile to fd 2...
	I0509 08:57:30.275286  230909 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0509 08:57:30.275403  230909 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/bin
	I0509 08:57:30.275555  230909 out.go:303] Setting JSON to false
	I0509 08:57:30.275573  230909 mustload.go:65] Loading cluster: default-k8s-different-port-20220509085728-6723
	I0509 08:57:30.278011  230909 out.go:177] * Profile "default-k8s-different-port-20220509085728-6723" not found. Run "minikube profile list" to view all profiles.
	I0509 08:57:30.279558  230909 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-different-port-20220509085728-6723"
** /stderr **
start_stop_delete_test.go:313: out/minikube-linux-amd64 pause -p default-k8s-different-port-20220509085728-6723 --alsologtostderr -v=1 failed: exit status 85
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220509085728-6723
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220509085728-6723: exit status 1 (48.570276ms)
-- stdout --
	[]
-- /stdout --
** stderr **
	Error: No such object: default-k8s-different-port-20220509085728-6723
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220509085728-6723 -n default-k8s-different-port-20220509085728-6723
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220509085728-6723 -n default-k8s-different-port-20220509085728-6723: exit status 85 (80.567937ms)
-- stdout --
	* Profile "default-k8s-different-port-20220509085728-6723" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p default-k8s-different-port-20220509085728-6723"
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220509085728-6723" host is not running, skipping log retrieval (state="* Profile \"default-k8s-different-port-20220509085728-6723\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p default-k8s-different-port-20220509085728-6723\"")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220509085728-6723
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20220509085728-6723: exit status 1 (49.887416ms)
-- stdout --
	[]
-- /stdout --
** stderr **
	Error: No such object: default-k8s-different-port-20220509085728-6723
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220509085728-6723 -n default-k8s-different-port-20220509085728-6723
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220509085728-6723 -n default-k8s-different-port-20220509085728-6723: exit status 85 (84.59563ms)
-- stdout --
	* Profile "default-k8s-different-port-20220509085728-6723" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p default-k8s-different-port-20220509085728-6723"
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220509085728-6723" host is not running, skipping log retrieval (state="* Profile \"default-k8s-different-port-20220509085728-6723\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p default-k8s-different-port-20220509085728-6723\"")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/Pause (0.34s)
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.21s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-20220509085730-6723 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
=== CONT  TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-20220509085730-6723 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (81.070508ms)
-- stdout --
	
	
-- /stdout --
** stderr **
	X Exiting due to MK_ADDON_ENABLE: loading profile: cluster "newest-cni-20220509085730-6723" does not exist
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
start_stop_delete_test.go:209: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-20220509085730-6723 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:213: WARNING: cni mode requires additional setup before pods can schedule :(
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220509085730-6723
=== CONT  TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
helpers_test.go:231: (dbg) Non-zero exit: docker inspect newest-cni-20220509085730-6723: exit status 1 (51.929125ms)
-- stdout --
	[]
-- /stdout --
** stderr **
	Error: No such object: newest-cni-20220509085730-6723
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220509085730-6723 -n newest-cni-20220509085730-6723
=== CONT  TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220509085730-6723 -n newest-cni-20220509085730-6723: exit status 85 (76.06453ms)
-- stdout --
	* Profile "newest-cni-20220509085730-6723" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p newest-cni-20220509085730-6723"
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "newest-cni-20220509085730-6723" host is not running, skipping log retrieval (state="* Profile \"newest-cni-20220509085730-6723\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p newest-cni-20220509085730-6723\"")
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.21s)
TestStartStop/group/newest-cni/serial/Stop (0.2s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-20220509085730-6723 --alsologtostderr -v=3
start_stop_delete_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p newest-cni-20220509085730-6723 --alsologtostderr -v=3: exit status 85 (73.112761ms)
-- stdout --
	* Profile "newest-cni-20220509085730-6723" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p newest-cni-20220509085730-6723"
-- /stdout --
** stderr **
	I0509 08:57:30.639820  231081 out.go:296] Setting OutFile to fd 1 ...
	I0509 08:57:30.639972  231081 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0509 08:57:30.639983  231081 out.go:309] Setting ErrFile to fd 2...
	I0509 08:57:30.639990  231081 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0509 08:57:30.640119  231081 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/bin
	I0509 08:57:30.640320  231081 out.go:303] Setting JSON to false
	I0509 08:57:30.640358  231081 mustload.go:65] Loading cluster: newest-cni-20220509085730-6723
	I0509 08:57:30.642791  231081 out.go:177] * Profile "newest-cni-20220509085730-6723" not found. Run "minikube profile list" to view all profiles.
	I0509 08:57:30.644198  231081 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-20220509085730-6723"
** /stderr **
start_stop_delete_test.go:232: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p newest-cni-20220509085730-6723 --alsologtostderr -v=3" : exit status 85
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220509085730-6723
helpers_test.go:231: (dbg) Non-zero exit: docker inspect newest-cni-20220509085730-6723: exit status 1 (51.108637ms)
-- stdout --
	[]
-- /stdout --
** stderr **
	Error: No such object: newest-cni-20220509085730-6723
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220509085730-6723 -n newest-cni-20220509085730-6723
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220509085730-6723 -n newest-cni-20220509085730-6723: exit status 85 (71.128107ms)
-- stdout --
	* Profile "newest-cni-20220509085730-6723" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p newest-cni-20220509085730-6723"
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "newest-cni-20220509085730-6723" host is not running, skipping log retrieval (state="* Profile \"newest-cni-20220509085730-6723\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p newest-cni-20220509085730-6723\"")
--- FAIL: TestStartStop/group/newest-cni/serial/Stop (0.20s)
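
Note: the Stop step reduces to running `minikube stop` and then asserting that `status --format={{.Host}}` prints "Stopped". A minimal sketch, with the binary path and profile name taken from the log:

// A sketch of the stop-then-verify sequence.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	bin := "out/minikube-linux-amd64"
	profile := "newest-cni-20220509085730-6723"

	if out, err := exec.Command(bin, "stop", "-p", profile, "--alsologtostderr", "-v=3").CombinedOutput(); err != nil {
		// Exit status 85, as in the log, means there was no cluster to stop.
		fmt.Printf("stop failed: %v\n%s", err, out)
		return
	}
	// `status` can exit non-zero for a stopped host, so only the printed
	// value matters here (compare "status error ... (may be ok)" above).
	out, _ := exec.Command(bin, "status", "--format={{.Host}}", "-p", profile).CombinedOutput()
	if got := strings.TrimSpace(string(out)); got != "Stopped" {
		fmt.Printf("expected post-stop host status %q, got %q\n", "Stopped", got)
	}
}
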
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.28s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220509085730-6723 -n newest-cni-20220509085730-6723
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220509085730-6723 -n newest-cni-20220509085730-6723: exit status 85 (77.214734ms)
-- stdout --
	* Profile "newest-cni-20220509085730-6723" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p newest-cni-20220509085730-6723"
-- /stdout --
start_stop_delete_test.go:241: status error: exit status 85 (may be ok)
start_stop_delete_test.go:243: expected post-stop host status to be -"Stopped"- but got *"* Profile \"newest-cni-20220509085730-6723\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p newest-cni-20220509085730-6723\""*
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-20220509085730-6723 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p newest-cni-20220509085730-6723 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: exit status 10 (76.070467ms)
-- stdout --
	
	
-- /stdout --
** stderr **
	X Exiting due to MK_ADDON_ENABLE: loading profile: cluster "newest-cni-20220509085730-6723" does not exist
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
start_stop_delete_test.go:250: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p newest-cni-20220509085730-6723 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4": exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220509085730-6723
=== CONT  TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
helpers_test.go:231: (dbg) Non-zero exit: docker inspect newest-cni-20220509085730-6723: exit status 1 (50.771406ms)
-- stdout --
	[]
-- /stdout --
** stderr **
	Error: No such object: newest-cni-20220509085730-6723
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220509085730-6723 -n newest-cni-20220509085730-6723
=== CONT  TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220509085730-6723 -n newest-cni-20220509085730-6723: exit status 85 (78.6338ms)
-- stdout --
	* Profile "newest-cni-20220509085730-6723" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p newest-cni-20220509085730-6723"
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "newest-cni-20220509085730-6723" host is not running, skipping log retrieval (state="* Profile \"newest-cni-20220509085730-6723\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p newest-cni-20220509085730-6723\"")
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.28s)
TestStartStop/group/newest-cni/serial/SecondStart (0.27s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-linux-amd64
start_stop_delete_test.go:264: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220509085730-6723 -n newest-cni-20220509085730-6723
start_stop_delete_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220509085730-6723 -n newest-cni-20220509085730-6723: exit status 85 (78.109976ms)
-- stdout --
	* Profile "newest-cni-20220509085730-6723" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p newest-cni-20220509085730-6723"
-- /stdout --
start_stop_delete_test.go:264: status error: exit status 85 (may be ok)
start_stop_delete_test.go:266: expected host status after start-stop-start to be -"Running"- but got *"* Profile \"newest-cni-20220509085730-6723\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p newest-cni-20220509085730-6723\""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220509085730-6723
helpers_test.go:231: (dbg) Non-zero exit: docker inspect newest-cni-20220509085730-6723: exit status 1 (46.144551ms)
-- stdout --
	[]
-- /stdout --
** stderr **
	Error: No such object: newest-cni-20220509085730-6723
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220509085730-6723 -n newest-cni-20220509085730-6723
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220509085730-6723 -n newest-cni-20220509085730-6723: exit status 85 (75.36163ms)
-- stdout --
	* Profile "newest-cni-20220509085730-6723" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p newest-cni-20220509085730-6723"
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "newest-cni-20220509085730-6723" host is not running, skipping log retrieval (state="* Profile \"newest-cni-20220509085730-6723\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p newest-cni-20220509085730-6723\"")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (0.27s)
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.21s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-20220509085730-6723 "sudo crictl images -o json"
start_stop_delete_test.go:306: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p newest-cni-20220509085730-6723 "sudo crictl images -o json": exit status 85 (80.33188ms)
-- stdout --
	* Profile "newest-cni-20220509085730-6723" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p newest-cni-20220509085730-6723"
-- /stdout --
start_stop_delete_test.go:306: failed to get images inside minikube. args "out/minikube-linux-amd64 ssh -p newest-cni-20220509085730-6723 \"sudo crictl images -o json\"": exit status 85
start_stop_delete_test.go:306: failed to decode images json: invalid character '*' looking for beginning of value. output:
* Profile "newest-cni-20220509085730-6723" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p newest-cni-20220509085730-6723"
start_stop_delete_test.go:306: v1.24.1-rc.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/coredns/coredns:v1.8.6",
- 	"k8s.gcr.io/etcd:3.5.3-0",
- 	"k8s.gcr.io/kube-apiserver:v1.24.1-rc.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.24.1-rc.0",
- 	"k8s.gcr.io/kube-proxy:v1.24.1-rc.0",
- 	"k8s.gcr.io/kube-scheduler:v1.24.1-rc.0",
- 	"k8s.gcr.io/pause:3.7",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220509085730-6723
helpers_test.go:231: (dbg) Non-zero exit: docker inspect newest-cni-20220509085730-6723: exit status 1 (45.453742ms)
-- stdout --
	[]
-- /stdout --
** stderr **
	Error: No such object: newest-cni-20220509085730-6723
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220509085730-6723 -n newest-cni-20220509085730-6723
=== CONT  TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220509085730-6723 -n newest-cni-20220509085730-6723: exit status 85 (85.960123ms)
-- stdout --
	* Profile "newest-cni-20220509085730-6723" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p newest-cni-20220509085730-6723"
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "newest-cni-20220509085730-6723" host is not running, skipping log retrieval (state="* Profile \"newest-cni-20220509085730-6723\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p newest-cni-20220509085730-6723\"")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.21s)
TestNetworkPlugins/group/auto/Start (216.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p auto-20220509085553-6723 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p auto-20220509085553-6723 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=docker: exit status 90 (3m36.06397737s)

                                                
                                                
-- stdout --
	* [auto-20220509085553-6723] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14070
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* Using Docker driver with the root privilege
	* Starting control plane node auto-20220509085553-6723 in cluster auto-20220509085553-6723
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "auto-20220509085553-6723" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0509 08:57:31.536106  231521 out.go:296] Setting OutFile to fd 1 ...
	I0509 08:57:31.536265  231521 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0509 08:57:31.536278  231521 out.go:309] Setting ErrFile to fd 2...
	I0509 08:57:31.536285  231521 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0509 08:57:31.536443  231521 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/bin
	I0509 08:57:31.536889  231521 out.go:303] Setting JSON to false
	I0509 08:57:31.538899  231521 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":2406,"bootTime":1652084246,"procs":1069,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1024-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0509 08:57:31.538987  231521 start.go:125] virtualization: kvm guest
	I0509 08:57:31.541543  231521 out.go:177] * [auto-20220509085553-6723] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0509 08:57:31.543928  231521 out.go:177]   - MINIKUBE_LOCATION=14070
	I0509 08:57:31.543864  231521 notify.go:193] Checking for updates...
	I0509 08:57:31.547070  231521 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0509 08:57:31.548792  231521 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/kubeconfig
	I0509 08:57:31.550322  231521 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube
	I0509 08:57:31.552165  231521 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0509 08:57:31.554552  231521 config.go:178] Loaded profile config "cert-expiration-20220509085554-6723": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.0
	I0509 08:57:31.554692  231521 config.go:178] Loaded profile config "old-k8s-version-20220509085700-6723": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0509 08:57:31.554770  231521 driver.go:346] Setting default libvirt URI to qemu:///system
	I0509 08:57:31.601580  231521 docker.go:137] docker version: linux-20.10.15
	I0509 08:57:31.601700  231521 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0509 08:57:31.725054  231521 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:60 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:44 SystemTime:2022-05-09 08:57:31.633323559 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1024-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0509 08:57:31.725264  231521 docker.go:254] overlay module found
	I0509 08:57:31.727877  231521 out.go:177] * Using the docker driver based on user configuration
	I0509 08:57:31.729406  231521 start.go:284] selected driver: docker
	I0509 08:57:31.729441  231521 start.go:801] validating driver "docker" against <nil>
	I0509 08:57:31.729477  231521 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0509 08:57:31.729552  231521 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0509 08:57:31.729583  231521 out.go:239] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0509 08:57:31.731233  231521 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0509 08:57:31.733619  231521 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0509 08:57:31.854472  231521 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:60 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:48 SystemTime:2022-05-09 08:57:31.768220248 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1024-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0509 08:57:31.854592  231521 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0509 08:57:31.854878  231521 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0509 08:57:31.856985  231521 out.go:177] * Using Docker driver with the root privilege
	I0509 08:57:31.858527  231521 cni.go:95] Creating CNI manager for ""
	I0509 08:57:31.858551  231521 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0509 08:57:31.858565  231521 start_flags.go:306] config:
	{Name:auto-20220509085553-6723 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.0 ClusterName:auto-20220509085553-6723 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0509 08:57:31.860201  231521 out.go:177] * Starting control plane node auto-20220509085553-6723 in cluster auto-20220509085553-6723
	I0509 08:57:31.861605  231521 cache.go:120] Beginning downloading kic base image for docker with docker
	I0509 08:57:31.863173  231521 out.go:177] * Pulling base image ...
	I0509 08:57:31.864522  231521 preload.go:132] Checking if preload exists for k8s version v1.24.0 and runtime docker
	I0509 08:57:31.864586  231521 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.0-docker-overlay2-amd64.tar.lz4
	I0509 08:57:31.864636  231521 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0509 08:57:31.864659  231521 cache.go:57] Caching tarball of preloaded images
	I0509 08:57:31.864933  231521 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0509 08:57:31.864954  231521 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.0 on docker
	I0509 08:57:31.865098  231521 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/auto-20220509085553-6723/config.json ...
	I0509 08:57:31.865127  231521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/auto-20220509085553-6723/config.json: {Name:mk6974671a113efbe962bf47cfca5e10012131a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0509 08:57:31.915440  231521 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0509 08:57:31.915473  231521 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0509 08:57:31.915490  231521 cache.go:206] Successfully downloaded all kic artifacts
	I0509 08:57:31.915520  231521 start.go:352] acquiring machines lock for auto-20220509085553-6723: {Name:mk50cd35aef9d23cf0160abdc583ff29ea3c9ed4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0509 08:57:31.915639  231521 start.go:356] acquired machines lock for "auto-20220509085553-6723" in 100.062µs
	I0509 08:57:31.915665  231521 start.go:91] Provisioning new machine with config: &{Name:auto-20220509085553-6723 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.0 ClusterName:auto-20220509085553-6723 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.24.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0509 08:57:31.915753  231521 start.go:131] createHost starting for "" (driver="docker")
	I0509 08:57:31.918256  231521 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0509 08:57:31.918507  231521 start.go:165] libmachine.API.Create for "auto-20220509085553-6723" (driver="docker")
	I0509 08:57:31.918537  231521 client.go:168] LocalClient.Create starting
	I0509 08:57:31.918625  231521 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/ca.pem
	I0509 08:57:31.918660  231521 main.go:134] libmachine: Decoding PEM data...
	I0509 08:57:31.918676  231521 main.go:134] libmachine: Parsing certificate...
	I0509 08:57:31.918757  231521 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/cert.pem
	I0509 08:57:31.918777  231521 main.go:134] libmachine: Decoding PEM data...
	I0509 08:57:31.918790  231521 main.go:134] libmachine: Parsing certificate...
	I0509 08:57:31.919131  231521 cli_runner.go:164] Run: docker network inspect auto-20220509085553-6723 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0509 08:57:31.955249  231521 cli_runner.go:211] docker network inspect auto-20220509085553-6723 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0509 08:57:31.955334  231521 network_create.go:272] running [docker network inspect auto-20220509085553-6723] to gather additional debugging logs...
	I0509 08:57:31.955367  231521 cli_runner.go:164] Run: docker network inspect auto-20220509085553-6723
	W0509 08:57:31.997454  231521 cli_runner.go:211] docker network inspect auto-20220509085553-6723 returned with exit code 1
	I0509 08:57:31.997491  231521 network_create.go:275] error running [docker network inspect auto-20220509085553-6723]: docker network inspect auto-20220509085553-6723: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: auto-20220509085553-6723
	I0509 08:57:31.997509  231521 network_create.go:277] output of [docker network inspect auto-20220509085553-6723]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: auto-20220509085553-6723
	
	** /stderr **
	I0509 08:57:31.997573  231521 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0509 08:57:32.036101  231521 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0000103b8] misses:0}
	I0509 08:57:32.036164  231521 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0509 08:57:32.036183  231521 network_create.go:115] attempt to create docker network auto-20220509085553-6723 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0509 08:57:32.036241  231521 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220509085553-6723
	I0509 08:57:32.111841  231521 network_create.go:99] docker network auto-20220509085553-6723 192.168.49.0/24 created
	I0509 08:57:32.111895  231521 kic.go:106] calculated static IP "192.168.49.2" for the "auto-20220509085553-6723" container
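At this point the dedicated bridge network exists (created by the docker network create call above, with gateway 192.168.49.1 and MTU 1500) and the node is pinned to the first client address of the reserved range (ClientMin 192.168.49.2). Assuming the network is still present, the allocation can be verified with the same inspect pattern minikube uses, reduced to the IPAM fields:

    docker network inspect auto-20220509085553-6723 \
      --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'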
	I0509 08:57:32.112016  231521 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0509 08:57:32.150493  231521 cli_runner.go:164] Run: docker volume create auto-20220509085553-6723 --label name.minikube.sigs.k8s.io=auto-20220509085553-6723 --label created_by.minikube.sigs.k8s.io=true
	I0509 08:57:32.192214  231521 oci.go:103] Successfully created a docker volume auto-20220509085553-6723
	I0509 08:57:32.192317  231521 cli_runner.go:164] Run: docker run --rm --name auto-20220509085553-6723-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20220509085553-6723 --entrypoint /usr/bin/test -v auto-20220509085553-6723:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -d /var/lib
	I0509 08:57:32.930887  231521 oci.go:107] Successfully prepared a docker volume auto-20220509085553-6723
	I0509 08:57:32.930940  231521 preload.go:132] Checking if preload exists for k8s version v1.24.0 and runtime docker
	I0509 08:57:32.930966  231521 kic.go:179] Starting extracting preloaded images to volume ...
	I0509 08:57:32.931029  231521 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-20220509085553-6723:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I0509 08:57:37.718525  231521 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-20220509085553-6723:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir: (4.787430973s)
	I0509 08:57:37.718565  231521 kic.go:188] duration metric: took 4.787595 seconds to extract preloaded images to volume
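The extraction step above is how minikube avoids pulling images inside the new node: the lz4-compressed preload tarball is bind-mounted read-only into a throwaway kicbase container and untarred into the named volume that will back the node's /var. The general shape of the logged command, with hypothetical placeholders for the run-specific paths:

    docker run --rm --entrypoint /usr/bin/tar \
      -v /path/to/preloaded-images.tar.lz4:/preloaded.tar:ro \
      -v <cluster-volume>:/extractDir \
      <kicbase-image> -I lz4 -xf /preloaded.tar -C /extractDir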
	W0509 08:57:37.718600  231521 oci.go:136] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0509 08:57:37.718614  231521 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0509 08:57:37.718674  231521 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0509 08:57:37.884308  231521 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-20220509085553-6723 --name auto-20220509085553-6723 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20220509085553-6723 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-20220509085553-6723 --network auto-20220509085553-6723 --ip 192.168.49.2 --volume auto-20220509085553-6723:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5
	W0509 08:57:37.961795  231521 cli_runner.go:211] docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-20220509085553-6723 --name auto-20220509085553-6723 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20220509085553-6723 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-20220509085553-6723 --network auto-20220509085553-6723 --ip 192.168.49.2 --volume auto-20220509085553-6723:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 returned with exit code 125
	I0509 08:57:37.961874  231521 client.go:171] LocalClient.Create took 6.043327292s
	I0509 08:57:39.962120  231521 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0509 08:57:39.962220  231521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220509085553-6723
	W0509 08:57:40.005492  231521 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220509085553-6723 returned with exit code 1
	I0509 08:57:40.005646  231521 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
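These retries are minikube's SSH bootstrap: sshd is published to an ephemeral loopback port (--publish=127.0.0.1::22 in the docker run above), and the host port is recovered with the inspect template quoted in each attempt. Since minikube refuses to read ports from a container that is not running, every attempt fails identically. The same template works for the API server port, e.g. (a hypothetical variant of the logged call):

    docker container inspect auto-20220509085553-6723 \
      -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'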
	I0509 08:57:40.282027  231521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220509085553-6723
	W0509 08:57:40.315390  231521 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220509085553-6723 returned with exit code 1
	I0509 08:57:40.315483  231521 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0509 08:57:40.856180  231521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220509085553-6723
	W0509 08:57:40.892575  231521 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220509085553-6723 returned with exit code 1
	I0509 08:57:40.892716  231521 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0509 08:57:41.548645  231521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220509085553-6723
	W0509 08:57:41.728089  231521 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220509085553-6723 returned with exit code 1
	W0509 08:57:41.728194  231521 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0509 08:57:41.728211  231521 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0509 08:57:41.728246  231521 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0509 08:57:41.728274  231521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220509085553-6723
	W0509 08:57:41.761745  231521 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220509085553-6723 returned with exit code 1
	I0509 08:57:41.761862  231521 retry.go:31] will retry after 231.159374ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0509 08:57:41.994250  231521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220509085553-6723
	W0509 08:57:42.029159  231521 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220509085553-6723 returned with exit code 1
	I0509 08:57:42.029262  231521 retry.go:31] will retry after 445.058653ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0509 08:57:42.474951  231521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220509085553-6723
	W0509 08:57:42.512528  231521 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220509085553-6723 returned with exit code 1
	I0509 08:57:42.512668  231521 retry.go:31] will retry after 318.170823ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0509 08:57:42.831185  231521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220509085553-6723
	W0509 08:57:42.865085  231521 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220509085553-6723 returned with exit code 1
	I0509 08:57:42.865217  231521 retry.go:31] will retry after 553.938121ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0509 08:57:43.420143  231521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220509085553-6723
	W0509 08:57:43.455477  231521 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220509085553-6723 returned with exit code 1
	W0509 08:57:43.455582  231521 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0509 08:57:43.455598  231521 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0509 08:57:43.455605  231521 start.go:134] duration metric: createHost completed in 11.539845727s
	I0509 08:57:43.455614  231521 start.go:81] releasing machines lock for "auto-20220509085553-6723", held for 11.539964351s
	W0509 08:57:43.455645  231521 start.go:576] error starting host: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-20220509085553-6723 --name auto-20220509085553-6723 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20220509085553-6723 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-20220509085553-6723 --network auto-20220509085553-6723 --ip 192.168.49.2 --volume auto-20220509085553-6723:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5: exit status 125
	stdout:
	e6ae5d4be89bd0a1964f5972b60e6ea6d590ce6053dd318a6bb2164f448bd327
	
	stderr:
	docker: Error response from daemon: network auto-20220509085553-6723 not found.
	I0509 08:57:43.456074  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	W0509 08:57:43.490600  231521 start.go:581] delete host: Docker machine "auto-20220509085553-6723" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	W0509 08:57:43.490791  231521 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-20220509085553-6723 --name auto-20220509085553-6723 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20220509085553-6723 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-20220509085553-6723 --network auto-20220509085553-6723 --ip 192.168.49.2 --volume auto-20220509085553-6723:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5: exit status 125
	stdout:
	e6ae5d4be89bd0a1964f5972b60e6ea6d590ce6053dd318a6bb2164f448bd327
	
	stderr:
	docker: Error response from daemon: network auto-20220509085553-6723 not found.
	
	! StartHost failed, but will try again: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-20220509085553-6723 --name auto-20220509085553-6723 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20220509085553-6723 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-20220509085553-6723 --network auto-20220509085553-6723 --ip 192.168.49.2 --volume auto-20220509085553-6723:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5: exit status 125
	stdout:
	e6ae5d4be89bd0a1964f5972b60e6ea6d590ce6053dd318a6bb2164f448bd327
	
	stderr:
	docker: Error response from daemon: network auto-20220509085553-6723 not found.
	
	I0509 08:57:43.490808  231521 start.go:591] Will try again in 5 seconds ...
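This is the root of the failure: the network was created at 08:57:32, yet five seconds later docker run rejects the container with "network auto-20220509085553-6723 not found". Exit code 125 means the error came from the Docker daemon rather than from the containerized process, and the container ID on stdout shows the container was created but never started. One plausible explanation, with several tests sharing this daemon in parallel, is that another test's cleanup removed the network in the gap between create and run; the daemon's event stream for that window would confirm it, e.g.:

    docker events --since 2022-05-09T08:57:31 --until 2022-05-09T08:57:40 \
      --filter type=network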
	I0509 08:57:48.492331  231521 start.go:352] acquiring machines lock for auto-20220509085553-6723: {Name:mk50cd35aef9d23cf0160abdc583ff29ea3c9ed4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0509 08:57:48.492475  231521 start.go:356] acquired machines lock for "auto-20220509085553-6723" in 93.616µs
	I0509 08:57:48.492501  231521 start.go:94] Skipping create...Using existing machine configuration
	I0509 08:57:48.492514  231521 fix.go:55] fixHost starting: 
	I0509 08:57:48.492892  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 08:57:48.527095  231521 fix.go:103] recreateIfNeeded on auto-20220509085553-6723: state= err=<nil>
	I0509 08:57:48.527142  231521 fix.go:108] machineExists: false. err=machine does not exist
	I0509 08:57:48.529391  231521 out.go:177] * docker "auto-20220509085553-6723" container is missing, will recreate.
	I0509 08:57:48.530917  231521 delete.go:124] DEMOLISHING auto-20220509085553-6723 ...
	I0509 08:57:48.530993  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 08:57:48.566637  231521 stop.go:79] host is in state 
	I0509 08:57:48.566700  231521 main.go:134] libmachine: Stopping "auto-20220509085553-6723"...
	I0509 08:57:48.566761  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 08:57:48.602562  231521 kic_runner.go:93] Run: systemctl --version
	I0509 08:57:48.602594  231521 kic_runner.go:114] Args: [docker exec --privileged auto-20220509085553-6723 systemctl --version]
	I0509 08:57:48.639205  231521 kic_runner.go:93] Run: sudo service kubelet stop
	I0509 08:57:48.639231  231521 kic_runner.go:114] Args: [docker exec --privileged auto-20220509085553-6723 sudo service kubelet stop]
	I0509 08:57:48.684463  231521 openrc.go:165] stop output: 
	** stderr ** 
	Error response from daemon: Container e6ae5d4be89bd0a1964f5972b60e6ea6d590ce6053dd318a6bb2164f448bd327 is not running
	
	** /stderr **
	W0509 08:57:48.684490  231521 kic.go:439] couldn't stop kubelet. will continue with stop anyways: sudo service kubelet stop: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container e6ae5d4be89bd0a1964f5972b60e6ea6d590ce6053dd318a6bb2164f448bd327 is not running
	I0509 08:57:48.684551  231521 kic_runner.go:93] Run: sudo service kubelet stop
	I0509 08:57:48.684569  231521 kic_runner.go:114] Args: [docker exec --privileged auto-20220509085553-6723 sudo service kubelet stop]
	I0509 08:57:48.720583  231521 openrc.go:165] stop output: 
	** stderr ** 
	Error response from daemon: Container e6ae5d4be89bd0a1964f5972b60e6ea6d590ce6053dd318a6bb2164f448bd327 is not running
	
	** /stderr **
	W0509 08:57:48.720623  231521 kic.go:441] couldn't force stop kubelet. will continue with stop anyways: sudo service kubelet stop: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container e6ae5d4be89bd0a1964f5972b60e6ea6d590ce6053dd318a6bb2164f448bd327 is not running
	I0509 08:57:48.720711  231521 kic_runner.go:93] Run: docker ps -a --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}
	I0509 08:57:48.720723  231521 kic_runner.go:114] Args: [docker exec --privileged auto-20220509085553-6723 docker ps -a --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}]
	I0509 08:57:48.756456  231521 kic.go:452] unable list containers : docker: docker ps -a --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container e6ae5d4be89bd0a1964f5972b60e6ea6d590ce6053dd318a6bb2164f448bd327 is not running
	I0509 08:57:48.756489  231521 kic.go:462] successfully stopped kubernetes!
	I0509 08:57:48.756543  231521 kic_runner.go:93] Run: pgrep kube-apiserver
	I0509 08:57:48.756562  231521 kic_runner.go:114] Args: [docker exec --privileged auto-20220509085553-6723 pgrep kube-apiserver]
	I0509 08:57:48.830438  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 08:57:51.868757  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 08:57:54.904755  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 08:57:57.939373  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 08:58:00.973876  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 08:58:04.017159  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 08:58:07.051434  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 08:58:10.087991  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 08:58:13.126099  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 08:58:16.161860  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 08:58:19.199536  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 08:58:22.237824  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 08:58:25.273869  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 08:58:28.309301  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 08:58:31.344919  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 08:58:34.383338  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 08:58:37.419574  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 08:58:40.457725  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 08:58:43.496257  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 08:58:46.532799  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 08:58:49.569907  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 08:58:52.609871  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 08:58:55.645854  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 08:58:58.680774  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 08:59:01.725551  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 08:59:04.761866  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 08:59:07.804092  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 08:59:10.840110  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 08:59:13.877347  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 08:59:16.910116  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 08:59:19.944808  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 08:59:22.981815  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 08:59:26.026780  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 08:59:29.064527  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 08:59:32.122871  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 08:59:35.172761  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 08:59:38.224752  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 08:59:41.320792  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 08:59:44.360944  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 08:59:47.396794  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 08:59:50.432458  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 08:59:53.468827  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 08:59:56.512775  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 08:59:59.548808  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 09:00:02.595100  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 09:00:05.630511  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 09:00:08.665275  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 09:00:11.699314  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 09:00:14.744241  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 09:00:17.796775  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 09:00:20.836741  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 09:00:23.875799  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 09:00:26.911230  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 09:00:29.952775  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 09:00:32.988779  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 09:00:36.032788  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 09:00:39.073893  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 09:00:42.110797  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 09:00:45.149990  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 09:00:48.188138  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 09:00:51.224166  231521 stop.go:59] stop err: Maximum number of retries (60) exceeded
	I0509 09:00:51.224219  231521 delete.go:129] stophost failed (probably ok): Temporary Error: stop: Maximum number of retries (60) exceeded
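The wall of inspect calls above is the stop loop: minikube polls the container status roughly every three seconds, gives up after 60 attempts (08:57:48 through 09:00:51), then treats the stop failure as non-fatal and proceeds to delete. Because the container was created but never started, the recorded state stays empty (see "state=" and "host is in state " earlier), so the loop can never observe the exit it is waiting for. The underlying probe is just:

    docker container inspect auto-20220509085553-6723 --format '{{.State.Status}}'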
	I0509 09:00:51.224764  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	W0509 09:00:51.258241  231521 delete.go:135] deletehost failed: Docker machine "auto-20220509085553-6723" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0509 09:00:51.258352  231521 cli_runner.go:164] Run: docker container inspect -f {{.Id}} auto-20220509085553-6723
	I0509 09:00:51.291943  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 09:00:51.326028  231521 cli_runner.go:164] Run: docker exec --privileged -t auto-20220509085553-6723 /bin/bash -c "sudo init 0"
	W0509 09:00:51.363678  231521 cli_runner.go:211] docker exec --privileged -t auto-20220509085553-6723 /bin/bash -c "sudo init 0" returned with exit code 1
	I0509 09:00:51.363725  231521 oci.go:657] error shutdown auto-20220509085553-6723: docker exec --privileged -t auto-20220509085553-6723 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container e6ae5d4be89bd0a1964f5972b60e6ea6d590ce6053dd318a6bb2164f448bd327 is not running
	I0509 09:00:52.364021  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 09:00:52.402289  231521 oci.go:671] temporary error: container auto-20220509085553-6723 status is  but expect it to be exited
	I0509 09:00:52.402323  231521 oci.go:677] Successfully shutdown container auto-20220509085553-6723
	I0509 09:00:52.402375  231521 cli_runner.go:164] Run: docker rm -f -v auto-20220509085553-6723
	I0509 09:00:52.442557  231521 cli_runner.go:164] Run: docker container inspect -f {{.Id}} auto-20220509085553-6723
	W0509 09:00:52.475142  231521 cli_runner.go:211] docker container inspect -f {{.Id}} auto-20220509085553-6723 returned with exit code 1
	I0509 09:00:52.475218  231521 cli_runner.go:164] Run: docker network inspect auto-20220509085553-6723 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0509 09:00:52.508788  231521 cli_runner.go:211] docker network inspect auto-20220509085553-6723 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0509 09:00:52.508868  231521 network_create.go:272] running [docker network inspect auto-20220509085553-6723] to gather additional debugging logs...
	I0509 09:00:52.508887  231521 cli_runner.go:164] Run: docker network inspect auto-20220509085553-6723
	W0509 09:00:52.542173  231521 cli_runner.go:211] docker network inspect auto-20220509085553-6723 returned with exit code 1
	I0509 09:00:52.542206  231521 network_create.go:275] error running [docker network inspect auto-20220509085553-6723]: docker network inspect auto-20220509085553-6723: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: auto-20220509085553-6723
	I0509 09:00:52.542224  231521 network_create.go:277] output of [docker network inspect auto-20220509085553-6723]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: auto-20220509085553-6723
	
	** /stderr **
	W0509 09:00:52.542374  231521 delete.go:139] delete failed (probably ok) <nil>
	I0509 09:00:52.542388  231521 fix.go:115] Sleeping 1 second for extra luck!
	I0509 09:00:53.542494  231521 start.go:131] createHost starting for "" (driver="docker")
	I0509 09:00:53.545577  231521 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0509 09:00:53.545716  231521 start.go:165] libmachine.API.Create for "auto-20220509085553-6723" (driver="docker")
	I0509 09:00:53.545756  231521 client.go:168] LocalClient.Create starting
	I0509 09:00:53.545850  231521 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/ca.pem
	I0509 09:00:53.545884  231521 main.go:134] libmachine: Decoding PEM data...
	I0509 09:00:53.545904  231521 main.go:134] libmachine: Parsing certificate...
	I0509 09:00:53.545962  231521 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/cert.pem
	I0509 09:00:53.545979  231521 main.go:134] libmachine: Decoding PEM data...
	I0509 09:00:53.545994  231521 main.go:134] libmachine: Parsing certificate...
	I0509 09:00:53.546224  231521 cli_runner.go:164] Run: docker network inspect auto-20220509085553-6723 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0509 09:00:53.591414  231521 cli_runner.go:211] docker network inspect auto-20220509085553-6723 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0509 09:00:53.591503  231521 network_create.go:272] running [docker network inspect auto-20220509085553-6723] to gather additional debugging logs...
	I0509 09:00:53.591530  231521 cli_runner.go:164] Run: docker network inspect auto-20220509085553-6723
	W0509 09:00:53.627717  231521 cli_runner.go:211] docker network inspect auto-20220509085553-6723 returned with exit code 1
	I0509 09:00:53.627751  231521 network_create.go:275] error running [docker network inspect auto-20220509085553-6723]: docker network inspect auto-20220509085553-6723: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: auto-20220509085553-6723
	I0509 09:00:53.627774  231521 network_create.go:277] output of [docker network inspect auto-20220509085553-6723]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: auto-20220509085553-6723
	
	** /stderr **
	I0509 09:00:53.627819  231521 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0509 09:00:53.662274  231521 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-7685b53298ed IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:a3:d5:12:05}}
	I0509 09:00:53.663083  231521 network.go:240] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-919073daf1df IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:93:5b:71:36}}
	I0509 09:00:53.663864  231521 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0000103b8 192.168.67.0:0xc000010d30] misses:0}
	I0509 09:00:53.663904  231521 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
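On the retry, the subnet picker skips ranges already bound to existing bridges (192.168.49.0/24 and 192.168.58.0/24 above) and settles on 192.168.67.0/24. From this output the candidate list appears to step the third octet by 9 starting at 49; a one-line sketch of that observed sequence (an inference from the log, not a documented contract):

    for i in $(seq 49 9 94); do echo "192.168.$i.0/24"; done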
	I0509 09:00:53.663921  231521 network_create.go:115] attempt to create docker network auto-20220509085553-6723 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0509 09:00:53.663979  231521 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220509085553-6723
	I0509 09:00:53.747764  231521 network_create.go:99] docker network auto-20220509085553-6723 192.168.67.0/24 created
	I0509 09:00:53.747847  231521 kic.go:106] calculated static IP "192.168.67.2" for the "auto-20220509085553-6723" container
	I0509 09:00:53.747905  231521 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0509 09:00:53.795304  231521 cli_runner.go:164] Run: docker volume create auto-20220509085553-6723 --label name.minikube.sigs.k8s.io=auto-20220509085553-6723 --label created_by.minikube.sigs.k8s.io=true
	I0509 09:00:53.831492  231521 oci.go:103] Successfully created a docker volume auto-20220509085553-6723
	I0509 09:00:53.831592  231521 cli_runner.go:164] Run: docker run --rm --name auto-20220509085553-6723-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20220509085553-6723 --entrypoint /usr/bin/test -v auto-20220509085553-6723:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -d /var/lib
	I0509 09:00:54.348042  231521 oci.go:107] Successfully prepared a docker volume auto-20220509085553-6723
	I0509 09:00:54.348094  231521 preload.go:132] Checking if preload exists for k8s version v1.24.0 and runtime docker
	I0509 09:00:54.348118  231521 kic.go:179] Starting extracting preloaded images to volume ...
	I0509 09:00:54.348184  231521 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-20220509085553-6723:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I0509 09:01:01.453547  231521 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-20220509085553-6723:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir: (7.10528405s)
	I0509 09:01:01.453590  231521 kic.go:188] duration metric: took 7.105469 seconds to extract preloaded images to volume
	W0509 09:01:01.453648  231521 oci.go:136] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0509 09:01:01.453666  231521 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0509 09:01:01.453732  231521 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0509 09:01:01.600230  231521 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-20220509085553-6723 --name auto-20220509085553-6723 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20220509085553-6723 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-20220509085553-6723 --network auto-20220509085553-6723 --ip 192.168.67.2 --volume auto-20220509085553-6723:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5
	I0509 09:01:02.101794  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Running}}
	I0509 09:01:02.143310  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 09:01:02.181444  231521 cli_runner.go:164] Run: docker exec auto-20220509085553-6723 stat /var/lib/dpkg/alternatives/iptables
	I0509 09:01:02.255285  231521 oci.go:279] the created container "auto-20220509085553-6723" has a running status.
	I0509 09:01:02.255312  231521 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/auto-20220509085553-6723/id_rsa...
	I0509 09:01:02.616364  231521 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/auto-20220509085553-6723/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0509 09:01:02.712152  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 09:01:02.749751  231521 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0509 09:01:02.749775  231521 kic_runner.go:114] Args: [docker exec --privileged auto-20220509085553-6723 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0509 09:01:02.831697  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	I0509 09:01:02.870983  231521 machine.go:88] provisioning docker machine ...
	I0509 09:01:02.871036  231521 ubuntu.go:169] provisioning hostname "auto-20220509085553-6723"
	I0509 09:01:02.871090  231521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220509085553-6723
	I0509 09:01:02.907016  231521 main.go:134] libmachine: Using SSH client type: native
	I0509 09:01:02.907228  231521 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d8f60] 0x7dbfc0 <nil>  [] 0s} 127.0.0.1 49394 <nil> <nil>}
	I0509 09:01:02.907248  231521 main.go:134] libmachine: About to run SSH command:
	sudo hostname auto-20220509085553-6723 && echo "auto-20220509085553-6723" | sudo tee /etc/hostname
	I0509 09:01:03.038695  231521 main.go:134] libmachine: SSH cmd err, output: <nil>: auto-20220509085553-6723
	
	I0509 09:01:03.038769  231521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220509085553-6723
	I0509 09:01:03.077199  231521 main.go:134] libmachine: Using SSH client type: native
	I0509 09:01:03.077371  231521 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d8f60] 0x7dbfc0 <nil>  [] 0s} 127.0.0.1 49394 <nil> <nil>}
	I0509 09:01:03.077399  231521 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-20220509085553-6723' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-20220509085553-6723/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-20220509085553-6723' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0509 09:01:03.201032  231521 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0509 09:01:03.201065  231521 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube}
	I0509 09:01:03.201087  231521 ubuntu.go:177] setting up certificates
	I0509 09:01:03.201100  231521 provision.go:83] configureAuth start
	I0509 09:01:03.201150  231521 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20220509085553-6723
	I0509 09:01:03.236096  231521 provision.go:138] copyHostCerts
	I0509 09:01:03.236168  231521 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/ca.pem, removing ...
	I0509 09:01:03.236180  231521 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/ca.pem
	I0509 09:01:03.236257  231521 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/ca.pem (1078 bytes)
	I0509 09:01:03.236359  231521 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cert.pem, removing ...
	I0509 09:01:03.236375  231521 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cert.pem
	I0509 09:01:03.236420  231521 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cert.pem (1123 bytes)
	I0509 09:01:03.236530  231521 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/key.pem, removing ...
	I0509 09:01:03.236546  231521 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/key.pem
	I0509 09:01:03.236579  231521 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/key.pem (1679 bytes)
	I0509 09:01:03.236697  231521 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/ca-key.pem org=jenkins.auto-20220509085553-6723 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube auto-20220509085553-6723]
	I0509 09:01:03.362147  231521 provision.go:172] copyRemoteCerts
	I0509 09:01:03.362209  231521 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0509 09:01:03.362239  231521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220509085553-6723
	I0509 09:01:03.400435  231521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49394 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/auto-20220509085553-6723/id_rsa Username:docker}
	I0509 09:01:03.490168  231521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0509 09:01:03.515428  231521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0509 09:01:03.534853  231521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0509 09:01:03.556869  231521 provision.go:86] duration metric: configureAuth took 355.75598ms
	I0509 09:01:03.556903  231521 ubuntu.go:193] setting minikube options for container-runtime
	I0509 09:01:03.557085  231521 config.go:178] Loaded profile config "auto-20220509085553-6723": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.0
	I0509 09:01:03.557143  231521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220509085553-6723
	I0509 09:01:03.593379  231521 main.go:134] libmachine: Using SSH client type: native
	I0509 09:01:03.593573  231521 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d8f60] 0x7dbfc0 <nil>  [] 0s} 127.0.0.1 49394 <nil> <nil>}
	I0509 09:01:03.593596  231521 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0509 09:01:03.718097  231521 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0509 09:01:03.718124  231521 ubuntu.go:71] root file system type: overlay
	I0509 09:01:03.718333  231521 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0509 09:01:03.718401  231521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220509085553-6723
	I0509 09:01:03.757846  231521 main.go:134] libmachine: Using SSH client type: native
	I0509 09:01:03.758007  231521 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d8f60] 0x7dbfc0 <nil>  [] 0s} 127.0.0.1 49394 <nil> <nil>}
	I0509 09:01:03.758095  231521 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0509 09:01:03.905035  231521 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0509 09:01:03.905116  231521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220509085553-6723
	I0509 09:01:03.939733  231521 main.go:134] libmachine: Using SSH client type: native
	I0509 09:01:03.939882  231521 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d8f60] 0x7dbfc0 <nil>  [] 0s} 127.0.0.1 49394 <nil> <nil>}
	I0509 09:01:03.939903  231521 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0509 09:01:04.705392  231521 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-03-10 14:05:44.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-05-09 09:01:03.897087059 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0509 09:01:04.705426  231521 machine.go:91] provisioned docker machine in 1.834415294s
	I0509 09:01:04.705438  231521 client.go:171] LocalClient.Create took 11.159672476s
	I0509 09:01:04.705456  231521 start.go:173] duration metric: libmachine.API.Create for "auto-20220509085553-6723" took 11.159740143s
	I0509 09:01:04.705467  231521 start.go:306] post-start starting for "auto-20220509085553-6723" (driver="docker")
	I0509 09:01:04.705473  231521 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0509 09:01:04.705533  231521 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0509 09:01:04.705572  231521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220509085553-6723
	I0509 09:01:04.740011  231521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49394 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/auto-20220509085553-6723/id_rsa Username:docker}
	I0509 09:01:04.833833  231521 ssh_runner.go:195] Run: cat /etc/os-release
	I0509 09:01:04.837192  231521 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0509 09:01:04.837225  231521 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0509 09:01:04.837235  231521 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0509 09:01:04.837241  231521 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0509 09:01:04.837250  231521 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/addons for local assets ...
	I0509 09:01:04.837315  231521 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/files for local assets ...
	I0509 09:01:04.837388  231521 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/files/etc/ssl/certs/67232.pem -> 67232.pem in /etc/ssl/certs
	I0509 09:01:04.837487  231521 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0509 09:01:04.845167  231521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/files/etc/ssl/certs/67232.pem --> /etc/ssl/certs/67232.pem (1708 bytes)
	I0509 09:01:04.865327  231521 start.go:309] post-start completed in 159.844317ms
	I0509 09:01:04.865778  231521 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20220509085553-6723
	I0509 09:01:04.902647  231521 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/auto-20220509085553-6723/config.json ...
	I0509 09:01:04.902846  231521 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0509 09:01:04.902888  231521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220509085553-6723
	I0509 09:01:04.939999  231521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49394 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/auto-20220509085553-6723/id_rsa Username:docker}
	I0509 09:01:05.029559  231521 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0509 09:01:05.034316  231521 start.go:134] duration metric: createHost completed in 11.491785253s
	I0509 09:01:05.034427  231521 cli_runner.go:164] Run: docker container inspect auto-20220509085553-6723 --format={{.State.Status}}
	W0509 09:01:05.078478  231521 fix.go:129] unexpected machine state, will restart: <nil>
	I0509 09:01:05.078514  231521 machine.go:88] provisioning docker machine ...
	I0509 09:01:05.078535  231521 ubuntu.go:169] provisioning hostname "auto-20220509085553-6723"
	I0509 09:01:05.078601  231521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220509085553-6723
	I0509 09:01:05.119254  231521 main.go:134] libmachine: Using SSH client type: native
	I0509 09:01:05.119462  231521 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d8f60] 0x7dbfc0 <nil>  [] 0s} 127.0.0.1 49394 <nil> <nil>}
	I0509 09:01:05.119487  231521 main.go:134] libmachine: About to run SSH command:
	sudo hostname auto-20220509085553-6723 && echo "auto-20220509085553-6723" | sudo tee /etc/hostname
	I0509 09:01:05.257800  231521 main.go:134] libmachine: SSH cmd err, output: <nil>: auto-20220509085553-6723
	
	I0509 09:01:05.257892  231521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220509085553-6723
	I0509 09:01:05.297493  231521 main.go:134] libmachine: Using SSH client type: native
	I0509 09:01:05.297634  231521 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d8f60] 0x7dbfc0 <nil>  [] 0s} 127.0.0.1 49394 <nil> <nil>}
	I0509 09:01:05.297654  231521 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-20220509085553-6723' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-20220509085553-6723/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-20220509085553-6723' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0509 09:01:05.421203  231521 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0509 09:01:05.421246  231521 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube}
	I0509 09:01:05.421269  231521 ubuntu.go:177] setting up certificates
	I0509 09:01:05.421280  231521 provision.go:83] configureAuth start
	I0509 09:01:05.421328  231521 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20220509085553-6723
	I0509 09:01:05.455251  231521 provision.go:138] copyHostCerts
	I0509 09:01:05.455320  231521 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/ca.pem, removing ...
	I0509 09:01:05.455332  231521 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/ca.pem
	I0509 09:01:05.455417  231521 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/ca.pem (1078 bytes)
	I0509 09:01:05.455577  231521 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cert.pem, removing ...
	I0509 09:01:05.455595  231521 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cert.pem
	I0509 09:01:05.455640  231521 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cert.pem (1123 bytes)
	I0509 09:01:05.455709  231521 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/key.pem, removing ...
	I0509 09:01:05.455724  231521 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/key.pem
	I0509 09:01:05.455759  231521 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/key.pem (1679 bytes)
	I0509 09:01:05.455826  231521 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/ca-key.pem org=jenkins.auto-20220509085553-6723 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube auto-20220509085553-6723]
	I0509 09:01:05.792500  231521 provision.go:172] copyRemoteCerts
	I0509 09:01:05.792563  231521 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0509 09:01:05.792595  231521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220509085553-6723
	I0509 09:01:05.828321  231521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49394 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/auto-20220509085553-6723/id_rsa Username:docker}
	I0509 09:01:05.921026  231521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0509 09:01:05.940457  231521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/server.pem --> /etc/docker/server.pem (1245 bytes)
	I0509 09:01:05.960297  231521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0509 09:01:05.983359  231521 provision.go:86] duration metric: configureAuth took 562.06851ms
	I0509 09:01:05.983387  231521 ubuntu.go:193] setting minikube options for container-runtime
	I0509 09:01:05.983562  231521 config.go:178] Loaded profile config "auto-20220509085553-6723": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.0
	I0509 09:01:05.983628  231521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220509085553-6723
	I0509 09:01:06.020103  231521 main.go:134] libmachine: Using SSH client type: native
	I0509 09:01:06.020246  231521 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d8f60] 0x7dbfc0 <nil>  [] 0s} 127.0.0.1 49394 <nil> <nil>}
	I0509 09:01:06.020260  231521 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0509 09:01:06.140870  231521 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0509 09:01:06.140898  231521 ubuntu.go:71] root file system type: overlay
	I0509 09:01:06.141064  231521 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0509 09:01:06.141127  231521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220509085553-6723
	I0509 09:01:06.176558  231521 main.go:134] libmachine: Using SSH client type: native
	I0509 09:01:06.176766  231521 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d8f60] 0x7dbfc0 <nil>  [] 0s} 127.0.0.1 49394 <nil> <nil>}
	I0509 09:01:06.176873  231521 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0509 09:01:06.310372  231521 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0509 09:01:06.310449  231521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220509085553-6723
	I0509 09:01:06.345808  231521 main.go:134] libmachine: Using SSH client type: native
	I0509 09:01:06.345972  231521 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d8f60] 0x7dbfc0 <nil>  [] 0s} 127.0.0.1 49394 <nil> <nil>}
	I0509 09:01:06.346001  231521 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0509 09:01:06.477054  231521 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0509 09:01:06.477078  231521 machine.go:91] provisioned docker machine in 1.398557724s
	I0509 09:01:06.477089  231521 start.go:306] post-start starting for "auto-20220509085553-6723" (driver="docker")
	I0509 09:01:06.477097  231521 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0509 09:01:06.477154  231521 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0509 09:01:06.477203  231521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220509085553-6723
	I0509 09:01:06.512185  231521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49394 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/auto-20220509085553-6723/id_rsa Username:docker}
	I0509 09:01:06.601250  231521 ssh_runner.go:195] Run: cat /etc/os-release
	I0509 09:01:06.604708  231521 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0509 09:01:06.604754  231521 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0509 09:01:06.604768  231521 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0509 09:01:06.604776  231521 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0509 09:01:06.604788  231521 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/addons for local assets ...
	I0509 09:01:06.604847  231521 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/files for local assets ...
	I0509 09:01:06.605036  231521 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/files/etc/ssl/certs/67232.pem -> 67232.pem in /etc/ssl/certs
	I0509 09:01:06.605129  231521 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0509 09:01:06.613106  231521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/files/etc/ssl/certs/67232.pem --> /etc/ssl/certs/67232.pem (1708 bytes)
	I0509 09:01:06.633202  231521 start.go:309] post-start completed in 156.099017ms
	I0509 09:01:06.633277  231521 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0509 09:01:06.633320  231521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220509085553-6723
	I0509 09:01:06.669077  231521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49394 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/auto-20220509085553-6723/id_rsa Username:docker}
	I0509 09:01:06.753341  231521 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0509 09:01:06.757502  231521 fix.go:57] fixHost completed within 3m18.264987657s
	I0509 09:01:06.757529  231521 start.go:81] releasing machines lock for "auto-20220509085553-6723", held for 3m18.265039211s
	I0509 09:01:06.757609  231521 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20220509085553-6723
	I0509 09:01:06.791498  231521 ssh_runner.go:195] Run: sudo service containerd status
	I0509 09:01:06.791546  231521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220509085553-6723
	I0509 09:01:06.791556  231521 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0509 09:01:06.791618  231521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220509085553-6723
	I0509 09:01:06.829926  231521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49394 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/auto-20220509085553-6723/id_rsa Username:docker}
	I0509 09:01:06.832446  231521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49394 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/auto-20220509085553-6723/id_rsa Username:docker}
	I0509 09:01:06.936704  231521 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0509 09:01:06.946875  231521 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0509 09:01:06.946929  231521 ssh_runner.go:195] Run: sudo service crio status
	I0509 09:01:06.969692  231521 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0509 09:01:06.985219  231521 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0509 09:01:06.995931  231521 ssh_runner.go:195] Run: sudo service docker status
	I0509 09:01:07.016956  231521 ssh_runner.go:195] Run: sudo service cri-docker.socket status
	I0509 09:01:07.033705  231521 ssh_runner.go:195] Run: sudo service cri-docker.socket start
	I0509 09:01:07.511901  231521 openrc.go:111] start output: 
	** stderr ** 
	Failed to start cri-docker.socket.service: Unit cri-docker.socket.service not found.
	
	** /stderr **
	I0509 09:01:07.514497  231521 out.go:177] 
	W0509 09:01:07.516226  231521 out.go:239] X Exiting due to RUNTIME_ENABLE: sudo service cri-docker.socket start: Process exited with status 5
	stdout:
	
	stderr:
	Failed to start cri-docker.socket.service: Unit cri-docker.socket.service not found.
	
	X Exiting due to RUNTIME_ENABLE: sudo service cri-docker.socket start: Process exited with status 5
	stdout:
	
	stderr:
	Failed to start cri-docker.socket.service: Unit cri-docker.socket.service not found.
	
	W0509 09:01:07.516255  231521 out.go:239] * 
	* 
	W0509 09:01:07.517113  231521 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0509 09:01:07.519442  231521 out.go:177] 

** /stderr **
net_test.go:103: failed start: exit status 90
--- FAIL: TestNetworkPlugins/group/auto/Start (216.09s)
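The exit status 90 recorded above is minikube's RUNTIME_ENABLE error path: sudo service cri-docker.socket start returned status 5 because the kicbase image used in this run ships no cri-docker.socket unit. Below is a minimal sketch of probing that precondition directly on the node container; the container name is the profile from the log above, and the systemctl invocation is an assumption about how to inspect the kicbase image, not something the test itself runs:

    # Sketch: list cri-dockerd units inside the node container. Empty output
    # corresponds to the "Unit cri-docker.socket.service not found" failure above.
    docker exec auto-20220509085553-6723 systemctl list-unit-files --no-legend 'cri-docker.*'

If nothing is listed, the start command can only succeed with a kicbase image that bundles cri-dockerd.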

x
+
TestStartStop/group/newest-cni/serial/Pause (0.31s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-20220509085730-6723 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-20220509085730-6723 --alsologtostderr -v=1: exit status 85 (74.771765ms)

-- stdout --
	* Profile "newest-cni-20220509085730-6723" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p newest-cni-20220509085730-6723"

-- /stdout --
** stderr ** 
	I0509 08:57:31.602899  231552 out.go:296] Setting OutFile to fd 1 ...
	I0509 08:57:31.603034  231552 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0509 08:57:31.603044  231552 out.go:309] Setting ErrFile to fd 2...
	I0509 08:57:31.603049  231552 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0509 08:57:31.603168  231552 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/bin
	I0509 08:57:31.603349  231552 out.go:303] Setting JSON to false
	I0509 08:57:31.603369  231552 mustload.go:65] Loading cluster: newest-cni-20220509085730-6723
	I0509 08:57:31.606066  231552 out.go:177] * Profile "newest-cni-20220509085730-6723" not found. Run "minikube profile list" to view all profiles.
	I0509 08:57:31.607973  231552 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-20220509085730-6723"

** /stderr **
start_stop_delete_test.go:313: out/minikube-linux-amd64 pause -p newest-cni-20220509085730-6723 --alsologtostderr -v=1 failed: exit status 85
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220509085730-6723
helpers_test.go:231: (dbg) Non-zero exit: docker inspect newest-cni-20220509085730-6723: exit status 1 (45.775307ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: newest-cni-20220509085730-6723

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220509085730-6723 -n newest-cni-20220509085730-6723
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220509085730-6723 -n newest-cni-20220509085730-6723: exit status 85 (71.279822ms)

-- stdout --
	* Profile "newest-cni-20220509085730-6723" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p newest-cni-20220509085730-6723"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "newest-cni-20220509085730-6723" host is not running, skipping log retrieval (state="* Profile \"newest-cni-20220509085730-6723\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p newest-cni-20220509085730-6723\"")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220509085730-6723
helpers_test.go:231: (dbg) Non-zero exit: docker inspect newest-cni-20220509085730-6723: exit status 1 (49.24254ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: newest-cni-20220509085730-6723

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220509085730-6723 -n newest-cni-20220509085730-6723
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220509085730-6723 -n newest-cni-20220509085730-6723: exit status 85 (72.158718ms)

-- stdout --
	* Profile "newest-cni-20220509085730-6723" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p newest-cni-20220509085730-6723"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "newest-cni-20220509085730-6723" host is not running, skipping log retrieval (state="* Profile \"newest-cni-20220509085730-6723\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p newest-cni-20220509085730-6723\"")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.31s)
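Exit status 85 here is minikube's profile-not-found path: by the time Pause ran, no cluster for this profile existed, so there was nothing to pause. A small sketch of guarding a pause call on a possibly-absent profile follows; the profile name comes from the log above, and the grep over the JSON output is an assumption about its shape rather than a stable interface:

    # Sketch: attempt pause only when the profile exists, avoiding the
    # exit-85 "Profile ... not found" path seen above.
    PROFILE=newest-cni-20220509085730-6723
    if out/minikube-linux-amd64 profile list -o json 2>/dev/null | grep -q "\"Name\":\"${PROFILE}\""; then
      out/minikube-linux-amd64 pause -p "${PROFILE}" --alsologtostderr -v=1
    else
      echo "profile ${PROFILE} not found; cluster never started, nothing to pause"
    fi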

x
+
TestNetworkPlugins/group/kubenet/HairPin (58.3s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kubenet-20220509085553-6723 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0509 09:00:54.381581    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/skaffold-20220509085225-6723/client.crt: no such file or directory
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-20220509085553-6723 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.144486461s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-20220509085553-6723 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-20220509085553-6723 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.18459249s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-20220509085553-6723 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-20220509085553-6723 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.190803787s)

** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/kubenet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kubenet-20220509085553-6723 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

=== CONT  TestNetworkPlugins/group/kubenet/HairPin
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-20220509085553-6723 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.148991603s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-20220509085553-6723 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-20220509085553-6723 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.142321288s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-20220509085553-6723 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-20220509085553-6723 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.155867707s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
E0509 09:01:34.826321    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/addons-20220509082454-6723/client.crt: no such file or directory
net_test.go:238: (dbg) Run:  kubectl --context kubenet-20220509085553-6723 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-20220509085553-6723 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.15559855s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:243: failed to connect via pod host: exit status 1
--- FAIL: TestNetworkPlugins/group/kubenet/HairPin (58.30s)
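This check exercises hairpin connectivity: the netcat pod dials its own Service (the "netcat" name on port 8080), so traffic has to leave the pod, reach the Service VIP, and loop back to the same pod; with kubenet that round trip depends on hairpin mode being enabled on the bridge. Every nc attempt above timed out after roughly five seconds before the test gave up. A minimal Go sketch of the retrying probe the log shows (an assumed shape, not the real net_test.go loop; the kubectl context name is taken from the log):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // hairpinOK retries the same nc probe the test runs: -z only scans,
    // -w 5 caps each attempt at five seconds. Success requires the pod
    // to reach itself through its own Service.
    func hairpinOK(kubeContext string) bool {
        for attempt := 0; attempt < 7; attempt++ {
            cmd := exec.Command("kubectl", "--context", kubeContext,
                "exec", "deployment/netcat", "--",
                "/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080")
            if err := cmd.Run(); err == nil {
                return true
            }
            time.Sleep(time.Second)
        }
        return false
    }

    func main() {
        fmt.Println("hairpin ok:", hairpinOK("kubenet-20220509085553-6723"))
    }

If the probe fails only for the pod's own Service while pod-to-pod traffic works, hairpin NAT on the kubenet bridge is the usual suspect.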

TestNetworkPlugins/group/calico/Start (521.36s)
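This run takes 8m41s, well past the --wait-timeout=5m budget that bounds the post-start component checks, and ends with exit status 80, the code minikube start returns in this log when bring-up fails. A minimal Go sketch of driving such a start under an outer deadline (an illustrative harness, not the real net_test.go code; binary path, flags, and profile name are taken from the log):

    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        // Outer deadline for the whole invocation; --wait-timeout=5m only
        // bounds minikube's own post-start component verification.
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
        defer cancel()
        cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64", "start",
            "-p", "calico-20220509085554-6723", "--memory=2048",
            "--alsologtostderr", "--wait=true", "--wait-timeout=5m",
            "--cni=calico", "--driver=docker", "--container-runtime=docker")
        if out, err := cmd.CombinedOutput(); err != nil {
            // Exit status 80 in the log below marks the failed start.
            fmt.Printf("start failed: %v\n%s", err, out)
        }
    }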

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p calico-20220509085554-6723 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=docker

=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p calico-20220509085554-6723 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=docker: exit status 80 (8m41.32401335s)

-- stdout --
	* [calico-20220509085554-6723] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14070
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* Using Docker driver with the root privilege
	* Starting control plane node calico-20220509085554-6723 in cluster calico-20220509085554-6723
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.24.0 on Docker 20.10.13 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring Calico (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

-- /stdout --
** stderr ** 
	I0509 09:01:52.833402  277284 out.go:296] Setting OutFile to fd 1 ...
	I0509 09:01:52.833517  277284 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0509 09:01:52.833526  277284 out.go:309] Setting ErrFile to fd 2...
	I0509 09:01:52.833531  277284 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0509 09:01:52.833660  277284 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/bin
	I0509 09:01:52.833981  277284 out.go:303] Setting JSON to false
	I0509 09:01:52.835731  277284 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":2667,"bootTime":1652084246,"procs":760,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1024-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0509 09:01:52.835816  277284 start.go:125] virtualization: kvm guest
	I0509 09:01:52.838910  277284 out.go:177] * [calico-20220509085554-6723] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0509 09:01:52.841022  277284 out.go:177]   - MINIKUBE_LOCATION=14070
	I0509 09:01:52.840969  277284 notify.go:193] Checking for updates...
	I0509 09:01:52.844793  277284 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0509 09:01:52.846604  277284 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/kubeconfig
	I0509 09:01:52.848762  277284 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube
	I0509 09:01:52.850631  277284 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0509 09:01:52.853710  277284 config.go:178] Loaded profile config "cilium-20220509085554-6723": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.0
	I0509 09:01:52.853863  277284 config.go:178] Loaded profile config "false-20220509085554-6723": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.0
	I0509 09:01:52.854014  277284 config.go:178] Loaded profile config "old-k8s-version-20220509085700-6723": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0509 09:01:52.854077  277284 driver.go:346] Setting default libvirt URI to qemu:///system
	I0509 09:01:52.903921  277284 docker.go:137] docker version: linux-20.10.15
	I0509 09:01:52.904061  277284 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0509 09:01:53.028795  277284 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:60 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:49 SystemTime:2022-05-09 09:01:52.936250357 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1024-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0509 09:01:53.028931  277284 docker.go:254] overlay module found
	I0509 09:01:53.031341  277284 out.go:177] * Using the docker driver based on user configuration
	I0509 09:01:53.033128  277284 start.go:284] selected driver: docker
	I0509 09:01:53.033155  277284 start.go:801] validating driver "docker" against <nil>
	I0509 09:01:53.033181  277284 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0509 09:01:53.033242  277284 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0509 09:01:53.033262  277284 out.go:239] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0509 09:01:53.035044  277284 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0509 09:01:53.037437  277284 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0509 09:01:53.173054  277284 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:60 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:49 SystemTime:2022-05-09 09:01:53.08202814 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1024-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0509 09:01:53.173280  277284 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0509 09:01:53.173469  277284 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0509 09:01:53.175707  277284 out.go:177] * Using Docker driver with the root privilege
	I0509 09:01:53.177342  277284 cni.go:95] Creating CNI manager for "calico"
	I0509 09:01:53.177365  277284 start_flags.go:301] Found "Calico" CNI - setting NetworkPlugin=cni
	I0509 09:01:53.177386  277284 start_flags.go:306] config:
	{Name:calico-20220509085554-6723 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.0 ClusterName:calico-20220509085554-6723 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0509 09:01:53.179476  277284 out.go:177] * Starting control plane node calico-20220509085554-6723 in cluster calico-20220509085554-6723
	I0509 09:01:53.181014  277284 cache.go:120] Beginning downloading kic base image for docker with docker
	I0509 09:01:53.182655  277284 out.go:177] * Pulling base image ...
	I0509 09:01:53.184169  277284 preload.go:132] Checking if preload exists for k8s version v1.24.0 and runtime docker
	I0509 09:01:53.184224  277284 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.0-docker-overlay2-amd64.tar.lz4
	I0509 09:01:53.184240  277284 cache.go:57] Caching tarball of preloaded images
	I0509 09:01:53.184312  277284 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0509 09:01:53.184510  277284 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0509 09:01:53.184531  277284 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.0 on docker
	I0509 09:01:53.184727  277284 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/calico-20220509085554-6723/config.json ...
	I0509 09:01:53.184760  277284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/calico-20220509085554-6723/config.json: {Name:mkf6b2140a32b1374881531fb1dc7157ab2be436 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0509 09:01:53.233156  277284 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0509 09:01:53.233197  277284 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0509 09:01:53.233207  277284 cache.go:206] Successfully downloaded all kic artifacts
	I0509 09:01:53.233236  277284 start.go:352] acquiring machines lock for calico-20220509085554-6723: {Name:mkfa23c430f2b7dcd487a5e2857f91215a782034 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0509 09:01:53.233384  277284 start.go:356] acquired machines lock for "calico-20220509085554-6723" in 131.653µs
	I0509 09:01:53.233418  277284 start.go:91] Provisioning new machine with config: &{Name:calico-20220509085554-6723 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.0 ClusterName:calico-20220509085554-6723 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.24.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0509 09:01:53.233508  277284 start.go:131] createHost starting for "" (driver="docker")
	I0509 09:01:53.236189  277284 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0509 09:01:53.236468  277284 start.go:165] libmachine.API.Create for "calico-20220509085554-6723" (driver="docker")
	I0509 09:01:53.236505  277284 client.go:168] LocalClient.Create starting
	I0509 09:01:53.236575  277284 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/ca.pem
	I0509 09:01:53.236669  277284 main.go:134] libmachine: Decoding PEM data...
	I0509 09:01:53.236691  277284 main.go:134] libmachine: Parsing certificate...
	I0509 09:01:53.236756  277284 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/cert.pem
	I0509 09:01:53.236781  277284 main.go:134] libmachine: Decoding PEM data...
	I0509 09:01:53.236793  277284 main.go:134] libmachine: Parsing certificate...
	I0509 09:01:53.237121  277284 cli_runner.go:164] Run: docker network inspect calico-20220509085554-6723 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0509 09:01:53.272898  277284 cli_runner.go:211] docker network inspect calico-20220509085554-6723 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0509 09:01:53.272971  277284 network_create.go:272] running [docker network inspect calico-20220509085554-6723] to gather additional debugging logs...
	I0509 09:01:53.272992  277284 cli_runner.go:164] Run: docker network inspect calico-20220509085554-6723
	W0509 09:01:53.308059  277284 cli_runner.go:211] docker network inspect calico-20220509085554-6723 returned with exit code 1
	I0509 09:01:53.308098  277284 network_create.go:275] error running [docker network inspect calico-20220509085554-6723]: docker network inspect calico-20220509085554-6723: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-20220509085554-6723
	I0509 09:01:53.308125  277284 network_create.go:277] output of [docker network inspect calico-20220509085554-6723]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-20220509085554-6723
	
	** /stderr **
	I0509 09:01:53.308195  277284 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0509 09:01:53.344906  277284 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000010310] misses:0}
	I0509 09:01:53.344969  277284 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0509 09:01:53.344992  277284 network_create.go:115] attempt to create docker network calico-20220509085554-6723 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0509 09:01:53.345049  277284 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220509085554-6723
	I0509 09:01:53.422173  277284 network_create.go:99] docker network calico-20220509085554-6723 192.168.49.0/24 created
	I0509 09:01:53.422216  277284 kic.go:106] calculated static IP "192.168.49.2" for the "calico-20220509085554-6723" container
	I0509 09:01:53.422287  277284 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0509 09:01:53.460051  277284 cli_runner.go:164] Run: docker volume create calico-20220509085554-6723 --label name.minikube.sigs.k8s.io=calico-20220509085554-6723 --label created_by.minikube.sigs.k8s.io=true
	I0509 09:01:53.498045  277284 oci.go:103] Successfully created a docker volume calico-20220509085554-6723
	I0509 09:01:53.498131  277284 cli_runner.go:164] Run: docker run --rm --name calico-20220509085554-6723-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220509085554-6723 --entrypoint /usr/bin/test -v calico-20220509085554-6723:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -d /var/lib
	I0509 09:01:54.175477  277284 oci.go:107] Successfully prepared a docker volume calico-20220509085554-6723
	I0509 09:01:54.175533  277284 preload.go:132] Checking if preload exists for k8s version v1.24.0 and runtime docker
	I0509 09:01:54.175566  277284 kic.go:179] Starting extracting preloaded images to volume ...
	I0509 09:01:54.175638  277284 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220509085554-6723:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I0509 09:01:59.750483  277284 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220509085554-6723:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir: (5.574765858s)
	I0509 09:01:59.750518  277284 kic.go:188] duration metric: took 5.574948 seconds to extract preloaded images to volume
	W0509 09:01:59.750563  277284 oci.go:136] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0509 09:01:59.750576  277284 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0509 09:01:59.750640  277284 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0509 09:01:59.886415  277284 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20220509085554-6723 --name calico-20220509085554-6723 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220509085554-6723 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20220509085554-6723 --network calico-20220509085554-6723 --ip 192.168.49.2 --volume calico-20220509085554-6723:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5
	I0509 09:02:00.370509  277284 cli_runner.go:164] Run: docker container inspect calico-20220509085554-6723 --format={{.State.Running}}
	I0509 09:02:00.409574  277284 cli_runner.go:164] Run: docker container inspect calico-20220509085554-6723 --format={{.State.Status}}
	I0509 09:02:00.448120  277284 cli_runner.go:164] Run: docker exec calico-20220509085554-6723 stat /var/lib/dpkg/alternatives/iptables
	I0509 09:02:00.516210  277284 oci.go:279] the created container "calico-20220509085554-6723" has a running status.
	I0509 09:02:00.516265  277284 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/calico-20220509085554-6723/id_rsa...
	I0509 09:02:00.601943  277284 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/calico-20220509085554-6723/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0509 09:02:00.704112  277284 cli_runner.go:164] Run: docker container inspect calico-20220509085554-6723 --format={{.State.Status}}
	I0509 09:02:00.748810  277284 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0509 09:02:00.748833  277284 kic_runner.go:114] Args: [docker exec --privileged calico-20220509085554-6723 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0509 09:02:00.836152  277284 cli_runner.go:164] Run: docker container inspect calico-20220509085554-6723 --format={{.State.Status}}
	I0509 09:02:00.875672  277284 machine.go:88] provisioning docker machine ...
	I0509 09:02:00.875716  277284 ubuntu.go:169] provisioning hostname "calico-20220509085554-6723"
	I0509 09:02:00.875805  277284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220509085554-6723
	I0509 09:02:00.914121  277284 main.go:134] libmachine: Using SSH client type: native
	I0509 09:02:00.914320  277284 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d8f60] 0x7dbfc0 <nil>  [] 0s} 127.0.0.1 49404 <nil> <nil>}
	I0509 09:02:00.914341  277284 main.go:134] libmachine: About to run SSH command:
	sudo hostname calico-20220509085554-6723 && echo "calico-20220509085554-6723" | sudo tee /etc/hostname
	I0509 09:02:01.059659  277284 main.go:134] libmachine: SSH cmd err, output: <nil>: calico-20220509085554-6723
	
	I0509 09:02:01.059743  277284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220509085554-6723
	I0509 09:02:01.098162  277284 main.go:134] libmachine: Using SSH client type: native
	I0509 09:02:01.098325  277284 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d8f60] 0x7dbfc0 <nil>  [] 0s} 127.0.0.1 49404 <nil> <nil>}
	I0509 09:02:01.098345  277284 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-20220509085554-6723' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-20220509085554-6723/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-20220509085554-6723' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0509 09:02:01.225279  277284 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0509 09:02:01.225319  277284 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube}
	I0509 09:02:01.225344  277284 ubuntu.go:177] setting up certificates
	I0509 09:02:01.225363  277284 provision.go:83] configureAuth start
	I0509 09:02:01.225426  277284 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220509085554-6723
	I0509 09:02:01.261378  277284 provision.go:138] copyHostCerts
	I0509 09:02:01.261447  277284 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/ca.pem, removing ...
	I0509 09:02:01.261456  277284 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/ca.pem
	I0509 09:02:01.261543  277284 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/ca.pem (1078 bytes)
	I0509 09:02:01.261702  277284 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cert.pem, removing ...
	I0509 09:02:01.261724  277284 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cert.pem
	I0509 09:02:01.261762  277284 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cert.pem (1123 bytes)
	I0509 09:02:01.261852  277284 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/key.pem, removing ...
	I0509 09:02:01.261865  277284 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/key.pem
	I0509 09:02:01.261904  277284 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/key.pem (1679 bytes)
	I0509 09:02:01.261982  277284 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/ca-key.pem org=jenkins.calico-20220509085554-6723 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube calico-20220509085554-6723]
	I0509 09:02:01.312552  277284 provision.go:172] copyRemoteCerts
	I0509 09:02:01.312635  277284 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0509 09:02:01.312677  277284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220509085554-6723
	I0509 09:02:01.350376  277284 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49404 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/calico-20220509085554-6723/id_rsa Username:docker}
	I0509 09:02:01.445151  277284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0509 09:02:01.467084  277284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0509 09:02:01.489741  277284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0509 09:02:01.511796  277284 provision.go:86] duration metric: configureAuth took 286.413435ms
	I0509 09:02:01.511832  277284 ubuntu.go:193] setting minikube options for container-runtime
	I0509 09:02:01.512043  277284 config.go:178] Loaded profile config "calico-20220509085554-6723": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.0
	I0509 09:02:01.512104  277284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220509085554-6723
	I0509 09:02:01.548146  277284 main.go:134] libmachine: Using SSH client type: native
	I0509 09:02:01.548321  277284 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d8f60] 0x7dbfc0 <nil>  [] 0s} 127.0.0.1 49404 <nil> <nil>}
	I0509 09:02:01.548343  277284 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0509 09:02:01.669003  277284 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0509 09:02:01.669032  277284 ubuntu.go:71] root file system type: overlay
	I0509 09:02:01.669236  277284 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0509 09:02:01.669306  277284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220509085554-6723
	I0509 09:02:01.704793  277284 main.go:134] libmachine: Using SSH client type: native
	I0509 09:02:01.704973  277284 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d8f60] 0x7dbfc0 <nil>  [] 0s} 127.0.0.1 49404 <nil> <nil>}
	I0509 09:02:01.705043  277284 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0509 09:02:01.839715  277284 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0509 09:02:01.839798  277284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220509085554-6723
	I0509 09:02:01.876136  277284 main.go:134] libmachine: Using SSH client type: native
	I0509 09:02:01.876332  277284 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d8f60] 0x7dbfc0 <nil>  [] 0s} 127.0.0.1 49404 <nil> <nil>}
	I0509 09:02:01.876361  277284 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0509 09:02:02.579131  277284 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-03-10 14:05:44.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-05-09 09:02:01.834557817 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0509 09:02:02.579180  277284 machine.go:91] provisioned docker machine in 1.703478001s
	I0509 09:02:02.579193  277284 client.go:171] LocalClient.Create took 9.342678751s
	I0509 09:02:02.579215  277284 start.go:173] duration metric: libmachine.API.Create for "calico-20220509085554-6723" took 9.342748221s
	I0509 09:02:02.579228  277284 start.go:306] post-start starting for "calico-20220509085554-6723" (driver="docker")
	I0509 09:02:02.579234  277284 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0509 09:02:02.579371  277284 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0509 09:02:02.579421  277284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220509085554-6723
	I0509 09:02:02.615290  277284 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49404 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/calico-20220509085554-6723/id_rsa Username:docker}
	I0509 09:02:02.709201  277284 ssh_runner.go:195] Run: cat /etc/os-release
	I0509 09:02:02.712273  277284 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0509 09:02:02.712300  277284 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0509 09:02:02.712309  277284 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0509 09:02:02.712315  277284 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0509 09:02:02.712325  277284 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/addons for local assets ...
	I0509 09:02:02.712379  277284 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/files for local assets ...
	I0509 09:02:02.712455  277284 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/files/etc/ssl/certs/67232.pem -> 67232.pem in /etc/ssl/certs
	I0509 09:02:02.712534  277284 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0509 09:02:02.720124  277284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/files/etc/ssl/certs/67232.pem --> /etc/ssl/certs/67232.pem (1708 bytes)
	I0509 09:02:02.740805  277284 start.go:309] post-start completed in 161.561788ms
	I0509 09:02:02.741186  277284 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220509085554-6723
	I0509 09:02:02.774991  277284 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/calico-20220509085554-6723/config.json ...
	I0509 09:02:02.775266  277284 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0509 09:02:02.775308  277284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220509085554-6723
	I0509 09:02:02.810736  277284 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49404 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/calico-20220509085554-6723/id_rsa Username:docker}
	I0509 09:02:02.897506  277284 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0509 09:02:02.901784  277284 start.go:134] duration metric: createHost completed in 9.668266062s
	I0509 09:02:02.901811  277284 start.go:81] releasing machines lock for "calico-20220509085554-6723", held for 9.668414225s
	I0509 09:02:02.901904  277284 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220509085554-6723
	I0509 09:02:02.936658  277284 ssh_runner.go:195] Run: systemctl --version
	I0509 09:02:02.936684  277284 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0509 09:02:02.936715  277284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220509085554-6723
	I0509 09:02:02.936763  277284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220509085554-6723
	I0509 09:02:02.972776  277284 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49404 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/calico-20220509085554-6723/id_rsa Username:docker}
	I0509 09:02:02.973157  277284 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49404 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/calico-20220509085554-6723/id_rsa Username:docker}
	I0509 09:02:03.061152  277284 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0509 09:02:03.081291  277284 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (199 bytes)
	I0509 09:02:03.095547  277284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0509 09:02:03.108914  277284 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0509 09:02:03.118787  277284 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0509 09:02:03.118854  277284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0509 09:02:03.128871  277284 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0509 09:02:03.142968  277284 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0509 09:02:03.228414  277284 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0509 09:02:03.311920  277284 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0509 09:02:03.322756  277284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0509 09:02:03.399886  277284 ssh_runner.go:195] Run: sudo systemctl start docker
	I0509 09:02:03.410646  277284 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0509 09:02:03.487355  277284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0509 09:02:03.567830  277284 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0509 09:02:03.582109  277284 start.go:447] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0509 09:02:03.582178  277284 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0509 09:02:03.585750  277284 start.go:468] Will wait 60s for crictl version
	I0509 09:02:03.585812  277284 ssh_runner.go:195] Run: sudo crictl version
	I0509 09:02:03.699223  277284 start.go:477] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.13
	RuntimeApiVersion:  1.41.0
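The version handshake above succeeds only because /etc/crictl.yaml was written a moment earlier to point crictl at cri-dockerd; dockerd itself does not expose a CRI socket, so crictl would otherwise have no default endpoint. A quick manual check of that wiring (a verification sketch, not something the test run performs) would be:

	# the endpoint crictl was configured to use
	cat /etc/crictl.yaml
	# should report RuntimeName: docker served via cri-dockerd, as logged above
	sudo crictl version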
	I0509 09:02:03.699295  277284 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0509 09:02:03.741614  277284 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0509 09:02:03.787221  277284 out.go:204] * Preparing Kubernetes v1.24.0 on Docker 20.10.13 ...
	I0509 09:02:03.787333  277284 cli_runner.go:164] Run: docker network inspect calico-20220509085554-6723 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0509 09:02:03.824245  277284 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0509 09:02:03.828332  277284 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
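The bash one-liner above is minikube's idempotent /etc/hosts update: filter out any stale host.minikube.internal entry, append the current mapping, then install the result with a single sudo cp (a plain > /etc/hosts redirection would fail, since the redirection is performed by the unprivileged shell rather than by sudo). Annotated, the same pattern is:

	# sketch of the update pattern used above (the hosts entry is tab-separated)
	{
	  grep -v $'\thost.minikube.internal$' /etc/hosts    # drop any previous entry
	  echo "192.168.49.1	host.minikube.internal"        # append the fresh mapping
	} > /tmp/h.$$                                         # $$ expands to the shell PID
	sudo cp /tmp/h.$$ /etc/hosts                          # copy back with root privileges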
	I0509 09:02:03.839304  277284 preload.go:132] Checking if preload exists for k8s version v1.24.0 and runtime docker
	I0509 09:02:03.839372  277284 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0509 09:02:03.877354  277284 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.0
	k8s.gcr.io/kube-proxy:v1.24.0
	k8s.gcr.io/kube-controller-manager:v1.24.0
	k8s.gcr.io/kube-scheduler:v1.24.0
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0509 09:02:03.877382  277284 docker.go:541] Images already preloaded, skipping extraction
	I0509 09:02:03.877452  277284 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0509 09:02:03.915794  277284 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.0
	k8s.gcr.io/kube-proxy:v1.24.0
	k8s.gcr.io/kube-controller-manager:v1.24.0
	k8s.gcr.io/kube-scheduler:v1.24.0
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0509 09:02:03.915826  277284 cache_images.go:84] Images are preloaded, skipping loading
	I0509 09:02:03.915900  277284 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0509 09:02:04.015651  277284 cni.go:95] Creating CNI manager for "calico"
	I0509 09:02:04.015676  277284 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0509 09:02:04.015692  277284 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.24.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-20220509085554-6723 NodeName:calico-20220509085554-6723 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0509 09:02:04.015820  277284 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "calico-20220509085554-6723"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
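This generated config is copied to /var/tmp/minikube/kubeadm.yaml.new below and promoted to kubeadm.yaml before init runs. If a config like this ever needs checking by hand, kubeadm can validate it without modifying the node (a sketch; the test run itself does not do this):

	# parse and validate the Init/Cluster/Kubelet configuration without applying anything
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run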
	
	I0509 09:02:04.015916  277284 kubeadm.go:936] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=calico-20220509085554-6723 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.0 ClusterName:calico-20220509085554-6723 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
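The [Service] fragment above clears the stock ExecStart and re-launches the kubelet against cri-dockerd; it is installed just below as the drop-in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. After the daemon-reload, the merged unit can be inspected the same way the log inspects docker.service (a sketch, not part of the run):

	# show kubelet.service together with all of its drop-ins
	systemctl cat kubelet
	# confirm systemd resolved the overridden ExecStart
	systemctl show kubelet -p ExecStart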
	I0509 09:02:04.015977  277284 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.0
	I0509 09:02:04.024160  277284 binaries.go:44] Found k8s binaries, skipping transfer
	I0509 09:02:04.024225  277284 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0509 09:02:04.032089  277284 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (488 bytes)
	I0509 09:02:04.046005  277284 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0509 09:02:04.061506  277284 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2049 bytes)
	I0509 09:02:04.075610  277284 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0509 09:02:04.079152  277284 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0509 09:02:04.090091  277284 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/calico-20220509085554-6723 for IP: 192.168.49.2
	I0509 09:02:04.090242  277284 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/ca.key
	I0509 09:02:04.090301  277284 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/proxy-client-ca.key
	I0509 09:02:04.090361  277284 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/calico-20220509085554-6723/client.key
	I0509 09:02:04.090377  277284 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/calico-20220509085554-6723/client.crt with IP's: []
	I0509 09:02:04.167276  277284 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/calico-20220509085554-6723/client.crt ...
	I0509 09:02:04.167314  277284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/calico-20220509085554-6723/client.crt: {Name:mk19c04a0211216927ae4b7a7ce5dc75f45cfb48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0509 09:02:04.167569  277284 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/calico-20220509085554-6723/client.key ...
	I0509 09:02:04.167590  277284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/calico-20220509085554-6723/client.key: {Name:mk929646f5f961e8a51731543916e71e53578ba0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0509 09:02:04.167733  277284 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/calico-20220509085554-6723/apiserver.key.dd3b5fb2
	I0509 09:02:04.167753  277284 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/calico-20220509085554-6723/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0509 09:02:04.377838  277284 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/calico-20220509085554-6723/apiserver.crt.dd3b5fb2 ...
	I0509 09:02:04.377878  277284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/calico-20220509085554-6723/apiserver.crt.dd3b5fb2: {Name:mk5a9edea4e8c057524542cf83c9abcc1b509133 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0509 09:02:04.378088  277284 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/calico-20220509085554-6723/apiserver.key.dd3b5fb2 ...
	I0509 09:02:04.378102  277284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/calico-20220509085554-6723/apiserver.key.dd3b5fb2: {Name:mke136eaaab622b83b9f5dcc440811d41c921bd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0509 09:02:04.378193  277284 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/calico-20220509085554-6723/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/calico-20220509085554-6723/apiserver.crt
	I0509 09:02:04.378259  277284 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/calico-20220509085554-6723/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/calico-20220509085554-6723/apiserver.key
	I0509 09:02:04.378303  277284 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/calico-20220509085554-6723/proxy-client.key
	I0509 09:02:04.378314  277284 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/calico-20220509085554-6723/proxy-client.crt with IP's: []
	I0509 09:02:04.428957  277284 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/calico-20220509085554-6723/proxy-client.crt ...
	I0509 09:02:04.428988  277284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/calico-20220509085554-6723/proxy-client.crt: {Name:mk8cc70154b6f93313bbc17360ce827714e6d6e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0509 09:02:04.429186  277284 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/calico-20220509085554-6723/proxy-client.key ...
	I0509 09:02:04.429200  277284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/calico-20220509085554-6723/proxy-client.key: {Name:mk7163fb62adc6dbd6286216f15399d10da4537b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0509 09:02:04.429382  277284 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/6723.pem (1338 bytes)
	W0509 09:02:04.429425  277284 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/6723_empty.pem, impossibly tiny 0 bytes
	I0509 09:02:04.429434  277284 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/ca-key.pem (1679 bytes)
	I0509 09:02:04.429468  277284 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/ca.pem (1078 bytes)
	I0509 09:02:04.429503  277284 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/cert.pem (1123 bytes)
	I0509 09:02:04.429531  277284 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/key.pem (1679 bytes)
	I0509 09:02:04.429569  277284 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/files/etc/ssl/certs/67232.pem (1708 bytes)
	I0509 09:02:04.430099  277284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/calico-20220509085554-6723/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0509 09:02:04.450417  277284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/calico-20220509085554-6723/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0509 09:02:04.470325  277284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/calico-20220509085554-6723/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0509 09:02:04.491874  277284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/calico-20220509085554-6723/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0509 09:02:04.515388  277284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0509 09:02:04.538292  277284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0509 09:02:04.557770  277284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0509 09:02:04.577432  277284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0509 09:02:04.596962  277284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0509 09:02:04.616510  277284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/certs/6723.pem --> /usr/share/ca-certificates/6723.pem (1338 bytes)
	I0509 09:02:04.636585  277284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/files/etc/ssl/certs/67232.pem --> /usr/share/ca-certificates/67232.pem (1708 bytes)
	I0509 09:02:04.657905  277284 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0509 09:02:04.671992  277284 ssh_runner.go:195] Run: openssl version
	I0509 09:02:04.677603  277284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0509 09:02:04.685907  277284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0509 09:02:04.689351  277284 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May  9 08:25 /usr/share/ca-certificates/minikubeCA.pem
	I0509 09:02:04.689406  277284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0509 09:02:04.694834  277284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0509 09:02:04.704362  277284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6723.pem && ln -fs /usr/share/ca-certificates/6723.pem /etc/ssl/certs/6723.pem"
	I0509 09:02:04.712737  277284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6723.pem
	I0509 09:02:04.716170  277284 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May  9 08:33 /usr/share/ca-certificates/6723.pem
	I0509 09:02:04.716233  277284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6723.pem
	I0509 09:02:04.721668  277284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6723.pem /etc/ssl/certs/51391683.0"
	I0509 09:02:04.730407  277284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67232.pem && ln -fs /usr/share/ca-certificates/67232.pem /etc/ssl/certs/67232.pem"
	I0509 09:02:04.738767  277284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67232.pem
	I0509 09:02:04.742381  277284 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May  9 08:33 /usr/share/ca-certificates/67232.pem
	I0509 09:02:04.742436  277284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67232.pem
	I0509 09:02:04.747371  277284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67232.pem /etc/ssl/certs/3ec20f2e.0"
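The three openssl x509 -hash calls above explain the otherwise opaque symlink names: OpenSSL resolves CAs in /etc/ssl/certs by subject-name hash, so each certificate is linked as <hash>.0. The hash for minikubeCA.pem is b5213941, matching the b5213941.0 link created above; likewise 51391683.0 for 6723.pem and 3ec20f2e.0 for 67232.pem. The convention in two lines:

	# print the subject hash OpenSSL uses for directory lookup
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # -> b5213941
	# link the CA under that name so certificate verification can find it
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0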
	I0509 09:02:04.755027  277284 kubeadm.go:391] StartCluster: {Name:calico-20220509085554-6723 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.0 ClusterName:calico-20220509085554-6723 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.24.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0509 09:02:04.755138  277284 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0509 09:02:04.790503  277284 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0509 09:02:04.798489  277284 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0509 09:02:04.806696  277284 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0509 09:02:04.806759  277284 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0509 09:02:04.816292  277284 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0509 09:02:04.816340  277284 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0509 09:02:19.272101  277284 out.go:204]   - Generating certificates and keys ...
	I0509 09:02:19.275653  277284 out.go:204]   - Booting up control plane ...
	I0509 09:02:19.278910  277284 out.go:204]   - Configuring RBAC rules ...
	I0509 09:02:19.281409  277284 cni.go:95] Creating CNI manager for "calico"
	I0509 09:02:19.283438  277284 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I0509 09:02:19.285438  277284 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.0/kubectl ...
	I0509 09:02:19.285471  277284 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (202049 bytes)
	I0509 09:02:19.365449  277284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0509 09:02:20.893938  277284 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.24.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.528451889s)
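The ~200 KB manifest applied here is the Calico install; the objects it creates (the calico-node DaemonSet and the calico-kube-controllers Deployment) are exactly what the pod_ready waits further down block on. A manual equivalent of that wait, assuming the standard resource names from the stock manifest, would be:

	# watch the Calico DaemonSet roll out across the node(s)
	kubectl -n kube-system rollout status daemonset/calico-node
	# and the controllers Deployment
	kubectl -n kube-system rollout status deployment/calico-kube-controllers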
	I0509 09:02:20.893989  277284 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0509 09:02:20.894076  277284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0509 09:02:20.894088  277284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.0/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=3bac68e23e7013f03af5baca398608c8c8001fab minikube.k8s.io/name=calico-20220509085554-6723 minikube.k8s.io/updated_at=2022_05_09T09_02_20_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0509 09:02:20.998370  277284 ops.go:34] apiserver oom_adj: -16
	I0509 09:02:20.998378  277284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0509 09:02:21.606487  277284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0509 09:02:22.106926  277284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0509 09:02:22.606518  277284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0509 09:02:23.106774  277284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0509 09:02:23.607138  277284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0509 09:02:24.106606  277284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0509 09:02:24.606864  277284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0509 09:02:25.106435  277284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0509 09:02:25.606900  277284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0509 09:02:26.106695  277284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0509 09:02:26.607056  277284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0509 09:02:27.106903  277284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0509 09:02:27.606424  277284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0509 09:02:28.106449  277284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0509 09:02:28.607092  277284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0509 09:02:29.106304  277284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0509 09:02:29.606682  277284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0509 09:02:30.107028  277284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0509 09:02:30.607253  277284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0509 09:02:31.107113  277284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0509 09:02:31.461843  277284 kubeadm.go:1020] duration metric: took 10.56781553s to wait for elevateKubeSystemPrivileges.
	I0509 09:02:31.461877  277284 kubeadm.go:393] StartCluster complete in 26.706860151s
	I0509 09:02:31.461902  277284 settings.go:142] acquiring lock: {Name:mk0059ab96b71199ca0a558b9bc695696bca2ea4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0509 09:02:31.462012  277284 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/kubeconfig
	I0509 09:02:31.463361  277284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/kubeconfig: {Name:mk1330d0f99a2286cbe8cc1ffbe430ce56d1dfc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0509 09:02:32.727541  277284 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "calico-20220509085554-6723" rescaled to 1
	I0509 09:02:32.727609  277284 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.24.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0509 09:02:32.727868  277284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0509 09:02:32.900541  277284 out.go:177] * Verifying Kubernetes components...
	I0509 09:02:32.727891  277284 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0509 09:02:32.727995  277284 config.go:178] Loaded profile config "calico-20220509085554-6723": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.0
	I0509 09:02:32.913850  277284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0509 09:02:32.913999  277284 addons.go:65] Setting storage-provisioner=true in profile "calico-20220509085554-6723"
	I0509 09:02:32.917172  277284 addons.go:153] Setting addon storage-provisioner=true in "calico-20220509085554-6723"
	W0509 09:02:32.917190  277284 addons.go:165] addon storage-provisioner should already be in state true
	I0509 09:02:32.917240  277284 host.go:66] Checking if "calico-20220509085554-6723" exists ...
	I0509 09:02:32.917831  277284 cli_runner.go:164] Run: docker container inspect calico-20220509085554-6723 --format={{.State.Status}}
	I0509 09:02:32.913999  277284 addons.go:65] Setting default-storageclass=true in profile "calico-20220509085554-6723"
	I0509 09:02:32.917987  277284 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-20220509085554-6723"
	I0509 09:02:32.918375  277284 cli_runner.go:164] Run: docker container inspect calico-20220509085554-6723 --format={{.State.Status}}
	I0509 09:02:33.002864  277284 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0509 09:02:33.013914  277284 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0509 09:02:33.013943  277284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0509 09:02:33.014001  277284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220509085554-6723
	I0509 09:02:33.011881  277284 addons.go:153] Setting addon default-storageclass=true in "calico-20220509085554-6723"
	W0509 09:02:33.014282  277284 addons.go:165] addon default-storageclass should already be in state true
	I0509 09:02:33.014322  277284 host.go:66] Checking if "calico-20220509085554-6723" exists ...
	I0509 09:02:33.014902  277284 cli_runner.go:164] Run: docker container inspect calico-20220509085554-6723 --format={{.State.Status}}
	I0509 09:02:33.057413  277284 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49404 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/calico-20220509085554-6723/id_rsa Username:docker}
	I0509 09:02:33.064832  277284 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0509 09:02:33.064860  277284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0509 09:02:33.064916  277284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220509085554-6723
	I0509 09:02:33.103748  277284 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49404 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/calico-20220509085554-6723/id_rsa Username:docker}
	I0509 09:02:33.136953  277284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0509 09:02:33.137904  277284 node_ready.go:35] waiting up to 5m0s for node "calico-20220509085554-6723" to be "Ready" ...
	I0509 09:02:33.165428  277284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0509 09:02:33.262690  277284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0509 09:02:33.463527  277284 node_ready.go:49] node "calico-20220509085554-6723" has status "Ready":"True"
	I0509 09:02:33.463562  277284 node_ready.go:38] duration metric: took 325.63352ms waiting for node "calico-20220509085554-6723" to be "Ready" ...
	I0509 09:02:33.463574  277284 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0509 09:02:33.477511  277284 pod_ready.go:78] waiting up to 5m0s for pod "calico-kube-controllers-c44b4545-d4dnd" in "kube-system" namespace to be "Ready" ...
	I0509 09:02:34.675646  277284 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.538643382s)
	I0509 09:02:34.675693  277284 start.go:783] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
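The sed pipeline that just completed splices a hosts plugin block into the CoreDNS Corefile ahead of its forward directive, so host.minikube.internal resolves from inside the cluster. One way to verify the injection (a sketch; the expected fragment is shown as comments):

	# confirm the hosts block landed in the live Corefile
	kubectl -n kube-system get configmap coredns -o yaml | grep -A 3 'hosts {'
	#        hosts {
	#           192.168.49.1 host.minikube.internal
	#           fallthrough
	#        }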
	I0509 09:02:34.898533  277284 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.73305809s)
	I0509 09:02:34.898680  277284 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.635935974s)
	I0509 09:02:34.901328  277284 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0509 09:02:34.903459  277284 addons.go:417] enableAddons completed in 2.175547696s
	I0509 09:02:35.569380  277284 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-d4dnd" in "kube-system" namespace has status "Ready":"False"
	[~100 further identical pod_ready.go:102 polling lines for "calico-kube-controllers-c44b4545-d4dnd", logged roughly every 2.5s from 09:02:38 through 09:06:33, elided]
	I0509 09:06:33.565012  277284 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-d4dnd" in "kube-system" namespace has status "Ready":"False"
	I0509 09:06:34.030547  277284 pod_ready.go:81] duration metric: took 4m0.552942425s waiting for pod "calico-kube-controllers-c44b4545-d4dnd" in "kube-system" namespace to be "Ready" ...
	E0509 09:06:34.030571  277284 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0509 09:06:34.030580  277284 pod_ready.go:78] waiting up to 5m0s for pod "calico-node-2j9sg" in "kube-system" namespace to be "Ready" ...
	I0509 09:06:36.077674  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:06:38.077748  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:06:40.577819  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:06:43.077780  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:06:45.576972  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:06:48.077071  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:06:50.077738  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:06:52.577510  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:06:55.076581  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:06:57.078010  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:06:59.576559  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:07:02.077060  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:07:04.577164  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:07:06.577294  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:07:09.077013  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:07:11.077316  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:07:13.077784  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:07:15.576847  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:07:17.577029  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:07:19.577608  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:07:21.578084  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:07:24.077620  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:07:26.577604  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:07:29.076939  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:07:31.076985  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:07:33.077041  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:07:35.575853  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:07:37.576475  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:07:39.577600  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:07:41.579176  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:07:44.077276  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:07:46.078572  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:07:48.577551  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:07:51.076672  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:07:53.577000  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:07:55.577616  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:07:57.577670  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:07:59.578360  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:08:02.077375  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:08:04.579100  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:08:07.076562  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:08:09.077352  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:08:11.077675  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:08:13.077894  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:08:15.576765  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:08:17.582673  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:08:20.078134  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:08:22.577542  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:08:25.077490  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:08:27.577708  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:08:30.077138  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:08:32.577098  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:08:35.076853  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:08:37.076885  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:08:39.077925  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:08:41.577284  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:08:43.578355  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:08:46.076594  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:08:48.077109  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:08:50.077663  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:08:52.578175  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:08:55.077192  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:08:57.077256  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:08:59.077457  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:09:01.578253  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:09:04.076741  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:09:06.077543  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:09:08.576116  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:09:10.576498  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:09:12.577635  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:09:15.078007  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:09:17.577428  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:09:19.577590  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:09:22.077235  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:09:24.579875  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:09:27.078755  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:09:29.577132  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:09:31.577404  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:09:33.577508  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:09:36.076191  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:09:38.076838  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:09:40.077490  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:09:42.078490  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:09:44.576825  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:09:46.577254  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:09:49.082201  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:09:51.578924  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:09:54.077546  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:09:56.577642  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:09:59.076735  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:10:01.077058  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:10:03.576585  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:10:05.578092  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:10:08.076725  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:10:10.076963  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:10:12.078130  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:10:14.576554  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:10:16.577184  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:10:19.076782  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:10:21.577127  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:10:24.077108  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:10:26.577276  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:10:29.077367  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:10:31.576584  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:10:34.076851  277284 pod_ready.go:102] pod "calico-node-2j9sg" in "kube-system" namespace has status "Ready":"False"
	I0509 09:10:34.083617  277284 pod_ready.go:81] duration metric: took 4m0.053017866s waiting for pod "calico-node-2j9sg" in "kube-system" namespace to be "Ready" ...
	E0509 09:10:34.083657  277284 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0509 09:10:34.083678  277284 pod_ready.go:38] duration metric: took 8m0.620091474s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0509 09:10:34.086685  277284 out.go:177] 
	W0509 09:10:34.088456  277284 out.go:239] X Exiting due to GUEST_START: wait 5m0s for node: extra waiting: timed out waiting 5m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	W0509 09:10:34.088488  277284 out.go:239] * 
	W0509 09:10:34.089746  277284 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0509 09:10:34.091934  277284 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:103: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (521.36s)
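
The failure above is minikube's readiness gate giving up: pod_ready.go polled "calico-kube-controllers-c44b4545-d4dnd" and then "calico-node-2j9sg" every couple of seconds, spent the full 4m budget on each, and start exited with status 80 (GUEST_START). The same Ready check can be reproduced outside minikube with client-go; the following is a minimal sketch rather than minikube's actual pod_ready.go (the namespace, pod name, and 4m budget are taken from the log above; the kubeconfig handling and 2s poll interval are assumptions):

	// podready_sketch.go: approximate the Ready wait that timed out above.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumes the failing profile's context is current in ~/.kube/config.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Same namespace, pod, and per-pod budget as the wait that failed above.
		err = wait.PollImmediate(2*time.Second, 4*time.Minute, func() (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "calico-node-2j9sg", metav1.GetOptions{})
			if err != nil {
				return false, nil // treat API errors as transient and keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil // no PodReady condition reported yet
		})
		fmt.Println("ready err:", err) // nil means the pod went Ready in time
	}

Pointing this at the stuck pod shows whether its PodReady condition ever flips, or stays False for the whole window as it did in this run.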

                                                
                                    
x
+
TestNetworkPlugins/group/false/DNS (366.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-20220509085554-6723 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-20220509085554-6723 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.182059116s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0509 09:06:34.825819    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/addons-20220509082454-6723/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context false-20220509085554-6723 exec deployment/netcat -- nslookup kubernetes.default
E0509 09:06:41.376186    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/bridge-20220509085553-6723/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-20220509085554-6723 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.154624663s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context false-20220509085554-6723 exec deployment/netcat -- nslookup kubernetes.default
E0509 09:06:58.023524    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/kubenet-20220509085553-6723/client.crt: no such file or directory
E0509 09:07:06.756304    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/cilium-20220509085554-6723/client.crt: no such file or directory
E0509 09:07:06.761615    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/cilium-20220509085554-6723/client.crt: no such file or directory
E0509 09:07:06.771941    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/cilium-20220509085554-6723/client.crt: no such file or directory
E0509 09:07:06.792242    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/cilium-20220509085554-6723/client.crt: no such file or directory
E0509 09:07:06.832672    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/cilium-20220509085554-6723/client.crt: no such file or directory
E0509 09:07:06.912973    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/cilium-20220509085554-6723/client.crt: no such file or directory
E0509 09:07:07.073241    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/cilium-20220509085554-6723/client.crt: no such file or directory
E0509 09:07:07.393815    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/cilium-20220509085554-6723/client.crt: no such file or directory
E0509 09:07:08.034728    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/cilium-20220509085554-6723/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-20220509085554-6723 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.170428988s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0509 09:07:09.315544    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/cilium-20220509085554-6723/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context false-20220509085554-6723 exec deployment/netcat -- nslookup kubernetes.default
E0509 09:07:11.876387    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/cilium-20220509085554-6723/client.crt: no such file or directory
E0509 09:07:16.997451    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/cilium-20220509085554-6723/client.crt: no such file or directory
E0509 09:07:20.376470    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/enable-default-cni-20220509085553-6723/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-20220509085554-6723 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.146386131s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0509 09:07:27.237834    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/cilium-20220509085554-6723/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context false-20220509085554-6723 exec deployment/netcat -- nslookup kubernetes.default
E0509 09:07:36.590060    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/ingress-addon-legacy-20220509083550-6723/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-20220509085554-6723 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.158145321s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0509 09:07:47.718418    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/cilium-20220509085554-6723/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context false-20220509085554-6723 exec deployment/netcat -- nslookup kubernetes.default
E0509 09:08:03.297175    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/bridge-20220509085553-6723/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-20220509085554-6723 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.17050652s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0509 09:08:10.539306    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/skaffold-20220509085225-6723/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context false-20220509085554-6723 exec deployment/netcat -- nslookup kubernetes.default
E0509 09:08:19.943832    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/kubenet-20220509085553-6723/client.crt: no such file or directory
E0509 09:08:22.252129    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/custom-weave-20220509085554-6723/client.crt: no such file or directory
E0509 09:08:22.257477    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/custom-weave-20220509085554-6723/client.crt: no such file or directory
E0509 09:08:22.267806    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/custom-weave-20220509085554-6723/client.crt: no such file or directory
E0509 09:08:22.288136    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/custom-weave-20220509085554-6723/client.crt: no such file or directory
E0509 09:08:22.328454    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/custom-weave-20220509085554-6723/client.crt: no such file or directory
E0509 09:08:22.408862    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/custom-weave-20220509085554-6723/client.crt: no such file or directory
E0509 09:08:22.569308    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/custom-weave-20220509085554-6723/client.crt: no such file or directory
E0509 09:08:22.890092    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/custom-weave-20220509085554-6723/client.crt: no such file or directory
E0509 09:08:23.531075    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/custom-weave-20220509085554-6723/client.crt: no such file or directory
E0509 09:08:24.811256    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/custom-weave-20220509085554-6723/client.crt: no such file or directory
E0509 09:08:27.371990    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/custom-weave-20220509085554-6723/client.crt: no such file or directory
E0509 09:08:28.541308    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/kindnet-20220509085554-6723/client.crt: no such file or directory
E0509 09:08:28.679135    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/cilium-20220509085554-6723/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-20220509085554-6723 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.171726698s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0509 09:08:32.492354    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/custom-weave-20220509085554-6723/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context false-20220509085554-6723 exec deployment/netcat -- nslookup kubernetes.default
E0509 09:08:42.733069    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/custom-weave-20220509085554-6723/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-20220509085554-6723 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.141375994s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0509 09:08:56.225077    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/kindnet-20220509085554-6723/client.crt: no such file or directory
E0509 09:09:03.213870    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/custom-weave-20220509085554-6723/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context false-20220509085554-6723 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-20220509085554-6723 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.160440465s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-20220509085554-6723 exec deployment/netcat -- nslookup kubernetes.default
E0509 09:09:44.174226    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/custom-weave-20220509085554-6723/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-20220509085554-6723 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.14843169s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0509 09:10:04.217368    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/enable-default-cni-20220509085553-6723/client.crt: no such file or directory
E0509 09:10:19.454445    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/bridge-20220509085553-6723/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-20220509085554-6723 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-20220509085554-6723 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.171055983s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0509 09:11:03.784396    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/kubenet-20220509085553-6723/client.crt: no such file or directory
E0509 09:11:06.094493    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/custom-weave-20220509085554-6723/client.crt: no such file or directory
E0509 09:11:34.825683    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/addons-20220509082454-6723/client.crt: no such file or directory
E0509 09:12:06.756369    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/cilium-20220509085554-6723/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context false-20220509085554-6723 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-20220509085554-6723 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.152951076s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:180: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/false/DNS (366.93s)
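
The probe retried above is net_test.go:169 exec-ing nslookup inside the netcat deployment; per net_test.go:180 a passing run must print an answer containing "10.96.0.1", but every attempt here died after roughly 15s with ";; connection timed out; no servers could be reached". The interleaved cert_rotation errors reference client certs of other, already-deleted profiles (cilium, custom-weave, bridge, and others) and appear to be noise relative to this failure. A rough standalone version of the probe, as a sketch only (the context name, deployment name, and expected address come from the log; the retry count and back-off are assumptions):

	// dnsprobe_sketch.go: re-run the in-cluster DNS check from this test.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		const want = "10.96.0.1" // a healthy run resolves kubernetes.default to this
		for attempt := 1; attempt <= 10; attempt++ {
			out, err := exec.Command("kubectl",
				"--context", "false-20220509085554-6723",
				"exec", "deployment/netcat", "--",
				"nslookup", "kubernetes.default").CombinedOutput()
			if err == nil && strings.Contains(string(out), want) {
				fmt.Printf("DNS OK on attempt %d:\n%s", attempt, out)
				return
			}
			fmt.Printf("attempt %d failed (%v):\n%s", attempt, err, out)
			time.Sleep(20 * time.Second) // assumed back-off between retries
		}
		fmt.Println("DNS never resolved; same outcome as the failed test above")
	}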

                                                
                                    

Test pass (226/288)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 7.52
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.08
10 TestDownloadOnly/v1.24.0/json-events 4.32
11 TestDownloadOnly/v1.24.0/preload-exists 0
15 TestDownloadOnly/v1.24.0/LogsDuration 0.07
17 TestDownloadOnly/v1.24.1-rc.0/json-events 3.06
20 TestDownloadOnly/v1.24.1-rc.0/binaries 0
22 TestDownloadOnly/v1.24.1-rc.0/LogsDuration 0.08
23 TestDownloadOnly/DeleteAll 0.35
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.21
25 TestDownloadOnlyKic 2.48
26 TestBinaryMirror 0.92
27 TestOffline 65.26
29 TestAddons/Setup 100.76
31 TestAddons/parallel/Registry 16.79
32 TestAddons/parallel/Ingress 304.37
34 TestAddons/parallel/HelmTiller 10.86
36 TestAddons/parallel/CSI 43.25
38 TestAddons/serial/GCPAuth 38.03
39 TestAddons/StoppedEnableDisable 10.98
40 TestCertOptions 34.08
41 TestCertExpiration 215.75
42 TestDockerFlags 32.41
45 TestKVMDriverInstallOrUpdate 1.66
49 TestErrorSpam/setup 28.52
50 TestErrorSpam/start 1.02
51 TestErrorSpam/status 1.2
52 TestErrorSpam/pause 1.49
53 TestErrorSpam/unpause 1.57
54 TestErrorSpam/stop 11.03
57 TestFunctional/serial/CopySyncFile 0
58 TestFunctional/serial/StartWithProxy 44.27
59 TestFunctional/serial/AuditLog 0
60 TestFunctional/serial/SoftStart 5.65
61 TestFunctional/serial/KubeContext 0.04
62 TestFunctional/serial/KubectlGetPods 0.19
65 TestFunctional/serial/CacheCmd/cache/add_remote 2.24
66 TestFunctional/serial/CacheCmd/cache/add_local 0.91
67 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.07
68 TestFunctional/serial/CacheCmd/cache/list 0.06
69 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.39
70 TestFunctional/serial/CacheCmd/cache/cache_reload 1.91
71 TestFunctional/serial/CacheCmd/cache/delete 0.13
72 TestFunctional/serial/MinikubeKubectlCmd 0.12
73 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
74 TestFunctional/serial/ExtraConfig 29.55
75 TestFunctional/serial/ComponentHealth 0.06
76 TestFunctional/serial/LogsCmd 1.45
77 TestFunctional/serial/LogsFileCmd 1.49
79 TestFunctional/parallel/ConfigCmd 0.48
80 TestFunctional/parallel/DashboardCmd 13.7
81 TestFunctional/parallel/DryRun 0.69
82 TestFunctional/parallel/InternationalLanguage 0.27
83 TestFunctional/parallel/StatusCmd 1.91
86 TestFunctional/parallel/ServiceCmd 10.21
87 TestFunctional/parallel/ServiceCmdConnect 12.64
88 TestFunctional/parallel/AddonsCmd 0.19
89 TestFunctional/parallel/PersistentVolumeClaim 46.26
91 TestFunctional/parallel/SSHCmd 0.86
92 TestFunctional/parallel/CpCmd 1.89
93 TestFunctional/parallel/MySQL 25.18
94 TestFunctional/parallel/FileSync 0.46
95 TestFunctional/parallel/CertSync 2.75
99 TestFunctional/parallel/NodeLabels 0.08
101 TestFunctional/parallel/NonActiveRuntimeDisabled 0.45
103 TestFunctional/parallel/DockerEnv/bash 1.67
104 TestFunctional/parallel/Version/short 0.07
105 TestFunctional/parallel/Version/components 1.93
106 TestFunctional/parallel/ImageCommands/ImageListShort 0.39
107 TestFunctional/parallel/ImageCommands/ImageListTable 0.52
108 TestFunctional/parallel/ImageCommands/ImageListJson 0.38
109 TestFunctional/parallel/ImageCommands/ImageListYaml 0.4
110 TestFunctional/parallel/ImageCommands/ImageBuild 5.6
111 TestFunctional/parallel/ImageCommands/Setup 1.02
112 TestFunctional/parallel/ProfileCmd/profile_not_create 0.63
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.34
117 TestFunctional/parallel/ProfileCmd/profile_list 0.57
118 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.84
119 TestFunctional/parallel/ProfileCmd/profile_json_output 0.5
120 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.33
121 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 3.66
122 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
123 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
127 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
128 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.98
129 TestFunctional/parallel/ImageCommands/ImageRemove 0.55
130 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.29
131 TestFunctional/parallel/MountCmd/any-port 13.45
132 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.84
133 TestFunctional/parallel/UpdateContextCmd/no_changes 0.24
134 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.25
135 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.24
136 TestFunctional/parallel/MountCmd/specific-port 2.88
137 TestFunctional/delete_addon-resizer_images 0.11
138 TestFunctional/delete_my-image_image 0.03
139 TestFunctional/delete_minikube_cached_images 0.03
142 TestIngressAddonLegacy/StartLegacyK8sCluster 93.63
144 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 12.27
145 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.59
146 TestIngressAddonLegacy/serial/ValidateIngressAddons 42.31
149 TestJSONOutput/start/Command 43.31
150 TestJSONOutput/start/Audit 0
152 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
153 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
155 TestJSONOutput/pause/Command 0.66
156 TestJSONOutput/pause/Audit 0
158 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
159 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
161 TestJSONOutput/unpause/Command 0.65
162 TestJSONOutput/unpause/Audit 0
164 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
165 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
167 TestJSONOutput/stop/Command 11.02
168 TestJSONOutput/stop/Audit 0
170 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
171 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
172 TestErrorJSONOutput 0.31
174 TestKicCustomNetwork/create_custom_network 32.29
175 TestKicCustomNetwork/use_default_bridge_network 31.92
176 TestKicExistingNetwork 30.67
177 TestKicCustomSubnet 30.31
178 TestMainNoArgs 0.06
181 TestMountStart/serial/StartWithMountFirst 5.99
182 TestMountStart/serial/VerifyMountFirst 0.36
183 TestMountStart/serial/StartWithMountSecond 5.7
184 TestMountStart/serial/VerifyMountSecond 0.36
185 TestMountStart/serial/DeleteFirst 1.81
186 TestMountStart/serial/VerifyMountPostDelete 0.36
187 TestMountStart/serial/Stop 1.28
188 TestMountStart/serial/RestartStopped 6.91
189 TestMountStart/serial/VerifyMountPostStop 0.36
192 TestMultiNode/serial/FreshStart2Nodes 98.78
193 TestMultiNode/serial/DeployApp2Nodes 4.04
194 TestMultiNode/serial/PingHostFrom2Pods 0.9
195 TestMultiNode/serial/AddNode 47.68
196 TestMultiNode/serial/ProfileList 0.39
197 TestMultiNode/serial/CopyFile 12.82
198 TestMultiNode/serial/StopNode 2.59
199 TestMultiNode/serial/StartAfterStop 20.83
200 TestMultiNode/serial/RestartKeepsNodes 105.1
201 TestMultiNode/serial/DeleteNode 5.35
202 TestMultiNode/serial/StopMultiNode 21.86
203 TestMultiNode/serial/RestartMultiNode 60.61
204 TestMultiNode/serial/ValidateNameConflict 31.55
209 TestPreload 116.85
211 TestScheduledStopUnix 101.63
212 TestSkaffold 57.94
214 TestInsufficientStorage 13.38
215 TestRunningBinaryUpgrade 87.02
218 TestMissingContainerUpgrade 119.46
220 TestStoppedBinaryUpgrade/Setup 0.5
221 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
229 TestNoKubernetes/serial/StartWithK8s 49.18
230 TestStoppedBinaryUpgrade/Upgrade 84.36
231 TestNoKubernetes/serial/StartWithStopK8s 15.95
232 TestNoKubernetes/serial/Start 7.62
233 TestNoKubernetes/serial/VerifyK8sNotRunning 0.49
234 TestNoKubernetes/serial/ProfileList 1.28
235 TestNoKubernetes/serial/Stop 1.32
236 TestNoKubernetes/serial/StartNoArgs 6.2
237 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.42
238 TestStoppedBinaryUpgrade/MinikubeLogs 1.59
251 TestPause/serial/Start 52.15
253 TestStartStop/group/old-k8s-version/serial/FirstStart 122.36
254 TestPause/serial/SecondStartNoReconfiguration 5.86
255 TestPause/serial/Pause 0.74
256 TestPause/serial/VerifyStatus 0.52
257 TestPause/serial/Unpause 0.75
258 TestPause/serial/PauseAgain 0.93
259 TestPause/serial/DeletePaused 2.85
260 TestPause/serial/VerifyDeletedResources 3.13
262 TestStartStop/group/no-preload/serial/FirstStart 0.06
273 TestStartStop/group/embed-certs/serial/FirstStart 0.08
280 TestStartStop/group/default-k8s-different-port/serial/FirstStart 0.09
295 TestStartStop/group/newest-cni/serial/FirstStart 0.07
296 TestStartStop/group/newest-cni/serial/DeployApp 0
301 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
302 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
306 TestNetworkPlugins/group/kindnet/Start 55.6
307 TestNetworkPlugins/group/kindnet/ControllerPod 5.01
308 TestNetworkPlugins/group/kindnet/KubeletFlags 0.38
309 TestNetworkPlugins/group/kindnet/NetCatPod 11.24
310 TestNetworkPlugins/group/kindnet/DNS 0.15
311 TestNetworkPlugins/group/kindnet/Localhost 0.14
312 TestNetworkPlugins/group/kindnet/HairPin 0.15
313 TestNetworkPlugins/group/enable-default-cni/Start 46.06
314 TestStartStop/group/old-k8s-version/serial/DeployApp 8.51
315 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.71
316 TestStartStop/group/old-k8s-version/serial/Stop 10.93
317 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
318 TestStartStop/group/old-k8s-version/serial/SecondStart 605.49
319 TestNetworkPlugins/group/bridge/Start 48.89
320 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.79
321 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.76
322 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
323 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
324 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
325 TestNetworkPlugins/group/kubenet/Start 44.78
326 TestNetworkPlugins/group/bridge/KubeletFlags 0.38
327 TestNetworkPlugins/group/bridge/NetCatPod 11.38
328 TestNetworkPlugins/group/bridge/DNS 0.18
329 TestNetworkPlugins/group/bridge/Localhost 0.17
330 TestNetworkPlugins/group/bridge/HairPin 0.15
331 TestNetworkPlugins/group/cilium/Start 93.09
332 TestNetworkPlugins/group/kubenet/KubeletFlags 0.38
333 TestNetworkPlugins/group/kubenet/NetCatPod 13.24
334 TestNetworkPlugins/group/kubenet/DNS 0.16
335 TestNetworkPlugins/group/kubenet/Localhost 0.15
337 TestNetworkPlugins/group/false/Start 297.24
339 TestNetworkPlugins/group/cilium/ControllerPod 5.02
340 TestNetworkPlugins/group/cilium/KubeletFlags 0.43
341 TestNetworkPlugins/group/cilium/NetCatPod 12
342 TestNetworkPlugins/group/cilium/DNS 0.16
343 TestNetworkPlugins/group/cilium/Localhost 0.14
344 TestNetworkPlugins/group/cilium/HairPin 0.15
345 TestNetworkPlugins/group/custom-weave/Start 53.96
346 TestNetworkPlugins/group/custom-weave/KubeletFlags 0.44
347 TestNetworkPlugins/group/custom-weave/NetCatPod 9.29
348 TestNetworkPlugins/group/false/KubeletFlags 0.42
349 TestNetworkPlugins/group/false/NetCatPod 11.44
351 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.01
352 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
353 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.44
354 TestStartStop/group/old-k8s-version/serial/Pause 3.61
x
+
TestDownloadOnly/v1.16.0/json-events (7.52s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220509082434-6723 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220509082434-6723 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (7.519224174s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (7.52s)
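
json-events drives `minikube start` with `-o=json` and `--download-only`, so progress is reported as one JSON object per stdout line instead of human-readable text. A minimal consumer sketch, decoding each line generically rather than assuming minikube's exact event schema (the profile name below is a placeholder, not the one from this run):

	// jsonevents_sketch.go: stream the line-delimited JSON a download-only start emits.
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("minikube", "start", "-o=json", "--download-only",
			"-p", "download-only-sketch", // placeholder profile name
			"--kubernetes-version=v1.16.0", "--container-runtime=docker", "--driver=docker")
		stdout, err := cmd.StdoutPipe()
		if err != nil {
			panic(err)
		}
		if err := cmd.Start(); err != nil {
			panic(err)
		}
		sc := bufio.NewScanner(stdout)
		for sc.Scan() {
			var ev map[string]interface{}
			if json.Unmarshal(sc.Bytes(), &ev) != nil {
				continue // ignore any non-JSON lines defensively
			}
			fmt.Println("event:", ev)
		}
		_ = cmd.Wait()
	}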

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20220509082434-6723
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20220509082434-6723: exit status 85 (80.436207ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/09 08:24:34
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.18.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0509 08:24:34.670330    6735 out.go:296] Setting OutFile to fd 1 ...
	I0509 08:24:34.670486    6735 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0509 08:24:34.670501    6735 out.go:309] Setting ErrFile to fd 2...
	I0509 08:24:34.670509    6735 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0509 08:24:34.670624    6735 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/bin
	W0509 08:24:34.670749    6735 root.go:300] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/config/config.json: no such file or directory
	I0509 08:24:34.670954    6735 out.go:303] Setting JSON to true
	I0509 08:24:34.671826    6735 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":429,"bootTime":1652084246,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1024-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0509 08:24:34.671902    6735 start.go:125] virtualization: kvm guest
	I0509 08:24:34.674970    6735 out.go:97] [download-only-20220509082434-6723] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0509 08:24:34.675131    6735 notify.go:193] Checking for updates...
	I0509 08:24:34.676860    6735 out.go:169] MINIKUBE_LOCATION=14070
	W0509 08:24:34.675148    6735 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cache/preloaded-tarball: no such file or directory
	I0509 08:24:34.679813    6735 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0509 08:24:34.681370    6735 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/kubeconfig
	I0509 08:24:34.683155    6735 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube
	I0509 08:24:34.684791    6735 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0509 08:24:34.687574    6735 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0509 08:24:34.687811    6735 driver.go:346] Setting default libvirt URI to qemu:///system
	I0509 08:24:34.723837    6735 docker.go:137] docker version: linux-20.10.15
	I0509 08:24:34.723936    6735 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0509 08:24:35.268038    6735 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:60 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:33 SystemTime:2022-05-09 08:24:34.751875204 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1024-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0509 08:24:35.268152    6735 docker.go:254] overlay module found
	I0509 08:24:35.270318    6735 out.go:97] Using the docker driver based on user configuration
	I0509 08:24:35.270357    6735 start.go:284] selected driver: docker
	I0509 08:24:35.270366    6735 start.go:801] validating driver "docker" against <nil>
	I0509 08:24:35.270554    6735 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0509 08:24:35.376025    6735 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:60 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:33 SystemTime:2022-05-09 08:24:35.297589294 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1024-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0509 08:24:35.376146    6735 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0509 08:24:35.376652    6735 start_flags.go:373] Using suggested 8000MB memory alloc based on sys=32103MB, container=32103MB
	I0509 08:24:35.376783    6735 start_flags.go:829] Wait components to verify : map[apiserver:true system_pods:true]
	I0509 08:24:35.379249    6735 out.go:169] Using Docker driver with the root privilege
	I0509 08:24:35.380785    6735 cni.go:95] Creating CNI manager for ""
	I0509 08:24:35.380818    6735 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0509 08:24:35.380832    6735 start_flags.go:306] config:
	{Name:download-only-20220509082434-6723 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220509082434-6723 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0509 08:24:35.382607    6735 out.go:97] Starting control plane node download-only-20220509082434-6723 in cluster download-only-20220509082434-6723
	I0509 08:24:35.382642    6735 cache.go:120] Beginning downloading kic base image for docker with docker
	I0509 08:24:35.384091    6735 out.go:97] Pulling base image ...
	I0509 08:24:35.384130    6735 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0509 08:24:35.384280    6735 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0509 08:24:35.414727    6735 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0509 08:24:35.414759    6735 cache.go:57] Caching tarball of preloaded images
	I0509 08:24:35.415060    6735 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0509 08:24:35.417407    6735 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0509 08:24:35.417436    6735 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0509 08:24:35.426949    6735 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0509 08:24:35.426975    6735 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 to local cache
	I0509 08:24:35.427145    6735 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local cache directory
	I0509 08:24:35.427245    6735 image.go:119] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 to local cache
	I0509 08:24:35.455774    6735 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220509082434-6723"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.08s)
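
Note: exit status 85 here is expected rather than a real failure: the profile was created with --download-only, so there is no control plane node for "minikube logs" to inspect, and the test tolerates the non-zero exit. A minimal reproduction outside the harness (a sketch, assuming the same profile name and a built out/minikube-linux-amd64):

	out/minikube-linux-amd64 logs -p download-only-20220509082434-6723
	echo $?   # prints 85 while the profile has no running control plane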

TestDownloadOnly/v1.24.0/json-events (4.32s)

=== RUN   TestDownloadOnly/v1.24.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220509082434-6723 --force --alsologtostderr --kubernetes-version=v1.24.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220509082434-6723 --force --alsologtostderr --kubernetes-version=v1.24.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (4.320602931s)
--- PASS: TestDownloadOnly/v1.24.0/json-events (4.32s)
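
Note: with -o=json, minikube streams one JSON event per line on stdout, which is what the json-events test consumes. A hedged sketch for inspecting the events by hand; jq and the .type field (minikube's CloudEvents-style schema) are assumptions, not something this test asserts:

	out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220509082434-6723 \
	  --force --kubernetes-version=v1.24.0 --container-runtime=docker --driver=docker | jq -r .type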

TestDownloadOnly/v1.24.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.24.0/preload-exists
--- PASS: TestDownloadOnly/v1.24.0/preload-exists (0.00s)

TestDownloadOnly/v1.24.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.24.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20220509082434-6723
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20220509082434-6723: exit status 85 (74.070761ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/09 08:24:42
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.18.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0509 08:24:42.271487    6900 out.go:296] Setting OutFile to fd 1 ...
	I0509 08:24:42.271626    6900 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0509 08:24:42.271636    6900 out.go:309] Setting ErrFile to fd 2...
	I0509 08:24:42.271643    6900 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0509 08:24:42.271785    6900 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/bin
	W0509 08:24:42.271918    6900 root.go:300] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/config/config.json: no such file or directory
	I0509 08:24:42.272045    6900 out.go:303] Setting JSON to true
	I0509 08:24:42.272832    6900 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":436,"bootTime":1652084246,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1024-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0509 08:24:42.272901    6900 start.go:125] virtualization: kvm guest
	I0509 08:24:42.275764    6900 out.go:97] [download-only-20220509082434-6723] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0509 08:24:42.277888    6900 out.go:169] MINIKUBE_LOCATION=14070
	I0509 08:24:42.275972    6900 notify.go:193] Checking for updates...
	I0509 08:24:42.281484    6900 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0509 08:24:42.283408    6900 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/kubeconfig
	I0509 08:24:42.285023    6900 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube
	I0509 08:24:42.286891    6900 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220509082434-6723"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.24.0/LogsDuration (0.07s)

TestDownloadOnly/v1.24.1-rc.0/json-events (3.06s)

=== RUN   TestDownloadOnly/v1.24.1-rc.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220509082434-6723 --force --alsologtostderr --kubernetes-version=v1.24.1-rc.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220509082434-6723 --force --alsologtostderr --kubernetes-version=v1.24.1-rc.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (3.059524248s)
--- PASS: TestDownloadOnly/v1.24.1-rc.0/json-events (3.06s)

TestDownloadOnly/v1.24.1-rc.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.24.1-rc.0/binaries
--- PASS: TestDownloadOnly/v1.24.1-rc.0/binaries (0.00s)

TestDownloadOnly/v1.24.1-rc.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.24.1-rc.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20220509082434-6723
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20220509082434-6723: exit status 85 (74.970835ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/09 08:24:46
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.18.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0509 08:24:46.666002    7069 out.go:296] Setting OutFile to fd 1 ...
	I0509 08:24:46.666120    7069 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0509 08:24:46.666137    7069 out.go:309] Setting ErrFile to fd 2...
	I0509 08:24:46.666143    7069 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0509 08:24:46.666269    7069 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/bin
	W0509 08:24:46.666407    7069 root.go:300] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/config/config.json: no such file or directory
	I0509 08:24:46.666564    7069 out.go:303] Setting JSON to true
	I0509 08:24:46.667365    7069 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":441,"bootTime":1652084246,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1024-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0509 08:24:46.667433    7069 start.go:125] virtualization: kvm guest
	I0509 08:24:46.670008    7069 out.go:97] [download-only-20220509082434-6723] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0509 08:24:46.670153    7069 notify.go:193] Checking for updates...
	I0509 08:24:46.672050    7069 out.go:169] MINIKUBE_LOCATION=14070
	I0509 08:24:46.673674    7069 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0509 08:24:46.675532    7069 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/kubeconfig
	I0509 08:24:46.677281    7069 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube
	I0509 08:24:46.679164    7069 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220509082434-6723"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.24.1-rc.0/LogsDuration (0.08s)

TestDownloadOnly/DeleteAll (0.35s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.35s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.21s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-20220509082434-6723
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.21s)

TestDownloadOnlyKic (2.48s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:228: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-20220509082450-6723 --force --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:228: (dbg) Done: out/minikube-linux-amd64 start --download-only -p download-docker-20220509082450-6723 --force --alsologtostderr --driver=docker  --container-runtime=docker: (1.421354705s)
helpers_test.go:175: Cleaning up "download-docker-20220509082450-6723" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-20220509082450-6723
--- PASS: TestDownloadOnlyKic (2.48s)
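
Note: with the docker driver, a --download-only start also pulls the kic base image into the local daemon (the kicbase-builds pull is visible in the v1.16.0 log further up). A quick manual check, assuming the same image repository:

	docker images gcr.io/k8s-minikube/kicbase-builds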

TestBinaryMirror (0.92s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-20220509082453-6723 --alsologtostderr --binary-mirror http://127.0.0.1:43183 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-20220509082453-6723" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-20220509082453-6723
--- PASS: TestBinaryMirror (0.92s)
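
Note: --binary-mirror points the kubectl/kubelet/kubeadm binary downloads at the given URL instead of the default upstream; here the harness serves them from a short-lived local HTTP server. A sketch with a hypothetical profile name (the mirror must already be serving the binaries, and the port is whatever it listens on):

	out/minikube-linux-amd64 start --download-only -p binary-mirror-demo \
	  --binary-mirror http://127.0.0.1:43183 --driver=docker --container-runtime=docker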

TestOffline (65.26s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-20220509085336-6723 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-20220509085336-6723 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m3.013314912s)
helpers_test.go:175: Cleaning up "offline-docker-20220509085336-6723" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-20220509085336-6723
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-20220509085336-6723: (2.245001731s)
--- PASS: TestOffline (65.26s)

TestAddons/Setup (100.76s)

=== RUN   TestAddons/Setup
addons_test.go:75: (dbg) Run:  out/minikube-linux-amd64 start -p addons-20220509082454-6723 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:75: (dbg) Done: out/minikube-linux-amd64 start -p addons-20220509082454-6723 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller: (1m40.759150358s)
--- PASS: TestAddons/Setup (100.76s)
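
Note: a single start can enable several addons by repeating --addons, as above. To confirm what ended up enabled on the profile (a sketch; "addons list" prints a per-profile status table):

	out/minikube-linux-amd64 addons list -p addons-20220509082454-6723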

TestAddons/parallel/Registry (16.79s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:280: registry stabilized in 9.417462ms
addons_test.go:282: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-kqqf6" [32385d42-0e88-434c-9d19-dd305fc24814] Running
addons_test.go:282: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.008659025s
addons_test.go:285: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-proxy-bg6kd" [ce2e10d6-c702-46ae-9765-101df2d2ef68] Running
addons_test.go:285: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.018544811s
addons_test.go:290: (dbg) Run:  kubectl --context addons-20220509082454-6723 delete po -l run=registry-test --now
addons_test.go:295: (dbg) Run:  kubectl --context addons-20220509082454-6723 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:295: (dbg) Done: kubectl --context addons-20220509082454-6723 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.921933367s)
addons_test.go:309: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220509082454-6723 ip
2022/05/09 08:26:51 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:338: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220509082454-6723 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.79s)
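
Note: the DEBUG line above is the harness probing the registry addon from the host at the cluster IP on port 5000. The same probe by hand; /v2/_catalog is the standard Docker registry API endpoint, assuming the addon keeps its default exposure:

	MINIKUBE_IP=$(out/minikube-linux-amd64 -p addons-20220509082454-6723 ip)
	curl -s "http://${MINIKUBE_IP}:5000/v2/_catalog"   # lists repositories held by the addon registry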

TestAddons/parallel/Ingress (304.37s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:162: (dbg) Run:  kubectl --context addons-20220509082454-6723 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:182: (dbg) Run:  kubectl --context addons-20220509082454-6723 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:195: (dbg) Run:  kubectl --context addons-20220509082454-6723 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:200: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [8b49c07d-35d6-43c2-8d50-b77ebb05b4db] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:342: "nginx" [8b49c07d-35d6-43c2-8d50-b77ebb05b4db] Running
addons_test.go:200: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.008961016s
addons_test.go:212: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220509082454-6723 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:236: (dbg) Run:  kubectl --context addons-20220509082454-6723 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:241: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220509082454-6723 ip
addons_test.go:247: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220509082454-6723 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220509082454-6723 addons disable ingress --alsologtostderr -v=1
addons_test.go:261: (dbg) Done: out/minikube-linux-amd64 -p addons-20220509082454-6723 addons disable ingress --alsologtostderr -v=1: (4m50.783642487s)
--- PASS: TestAddons/parallel/Ingress (304.37s)

TestAddons/parallel/HelmTiller (10.86s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:406: tiller-deploy stabilized in 9.51633ms
addons_test.go:408: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:342: "tiller-deploy-c7d76457b-4zmrh" [f2654535-302c-4b23-ae19-29761a0720e7] Running
addons_test.go:408: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.00868454s
addons_test.go:423: (dbg) Run:  kubectl --context addons-20220509082454-6723 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:423: (dbg) Done: kubectl --context addons-20220509082454-6723 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.923008576s)
addons_test.go:440: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220509082454-6723 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.86s)

TestAddons/parallel/CSI (43.25s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:511: csi-hostpath-driver pods stabilized in 11.944675ms
addons_test.go:514: (dbg) Run:  kubectl --context addons-20220509082454-6723 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:519: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220509082454-6723 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:524: (dbg) Run:  kubectl --context addons-20220509082454-6723 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:529: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:342: "task-pv-pod" [9a4920ca-05ed-4dc5-90eb-df19ec0f2dca] Pending
helpers_test.go:342: "task-pv-pod" [9a4920ca-05ed-4dc5-90eb-df19ec0f2dca] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:342: "task-pv-pod" [9a4920ca-05ed-4dc5-90eb-df19ec0f2dca] Running
addons_test.go:529: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 19.010704828s
addons_test.go:534: (dbg) Run:  kubectl --context addons-20220509082454-6723 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:539: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20220509082454-6723 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20220509082454-6723 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:544: (dbg) Run:  kubectl --context addons-20220509082454-6723 delete pod task-pv-pod
addons_test.go:550: (dbg) Run:  kubectl --context addons-20220509082454-6723 delete pvc hpvc
addons_test.go:556: (dbg) Run:  kubectl --context addons-20220509082454-6723 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:561: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220509082454-6723 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:566: (dbg) Run:  kubectl --context addons-20220509082454-6723 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:342: "task-pv-pod-restore" [ff43f315-b160-4d61-9b35-8e0cef4b39ca] Pending
helpers_test.go:342: "task-pv-pod-restore" [ff43f315-b160-4d61-9b35-8e0cef4b39ca] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:342: "task-pv-pod-restore" [ff43f315-b160-4d61-9b35-8e0cef4b39ca] Running
addons_test.go:571: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 12.006389172s
addons_test.go:576: (dbg) Run:  kubectl --context addons-20220509082454-6723 delete pod task-pv-pod-restore
addons_test.go:580: (dbg) Run:  kubectl --context addons-20220509082454-6723 delete pvc hpvc-restore
addons_test.go:584: (dbg) Run:  kubectl --context addons-20220509082454-6723 delete volumesnapshot new-snapshot-demo
addons_test.go:588: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220509082454-6723 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:588: (dbg) Done: out/minikube-linux-amd64 -p addons-20220509082454-6723 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.903666169s)
addons_test.go:592: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220509082454-6723 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (43.25s)
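
Note: the steps above are a full snapshot-and-restore round trip. Condensed to its kubectl commands (manifests are the repo's integration testdata; the --context flag is omitted for brevity):

	kubectl create -f testdata/csi-hostpath-driver/pvc.yaml             # claim backed by csi-hostpath
	kubectl create -f testdata/csi-hostpath-driver/pv-pod.yaml          # pod that mounts the claim
	kubectl create -f testdata/csi-hostpath-driver/snapshot.yaml        # VolumeSnapshot of the claim
	kubectl create -f testdata/csi-hostpath-driver/pvc-restore.yaml     # new claim sourced from the snapshot
	kubectl create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml  # pod that mounts the restored claim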

TestAddons/serial/GCPAuth (38.03s)

=== RUN   TestAddons/serial/GCPAuth
addons_test.go:603: (dbg) Run:  kubectl --context addons-20220509082454-6723 create -f testdata/busybox.yaml
addons_test.go:609: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [24b61068-ea00-4de8-86be-3529f97bbb10] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [24b61068-ea00-4de8-86be-3529f97bbb10] Running
addons_test.go:609: (dbg) TestAddons/serial/GCPAuth: integration-test=busybox healthy within 8.013712841s
addons_test.go:615: (dbg) Run:  kubectl --context addons-20220509082454-6723 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:652: (dbg) Run:  kubectl --context addons-20220509082454-6723 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:665: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220509082454-6723 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:665: (dbg) Done: out/minikube-linux-amd64 -p addons-20220509082454-6723 addons disable gcp-auth --alsologtostderr -v=1: (5.805719347s)
addons_test.go:681: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220509082454-6723 addons enable gcp-auth
addons_test.go:687: (dbg) Run:  kubectl --context addons-20220509082454-6723 apply -f testdata/private-image.yaml
addons_test.go:694: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image" in namespace "default" ...
helpers_test.go:342: "private-image-7c74db7cd9-vfg72" [5e9fc4b4-9c4c-46a5-9808-1cd695afa439] Pending / Ready:ContainersNotReady (containers with unready status: [private-image]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image])
helpers_test.go:342: "private-image-7c74db7cd9-vfg72" [5e9fc4b4-9c4c-46a5-9808-1cd695afa439] Running
addons_test.go:694: (dbg) TestAddons/serial/GCPAuth: integration-test=private-image healthy within 13.006846861s
addons_test.go:700: (dbg) Run:  kubectl --context addons-20220509082454-6723 apply -f testdata/private-image-eu.yaml
addons_test.go:705: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image-eu" in namespace "default" ...
helpers_test.go:342: "private-image-eu-545d57c67f-6rj2x" [5be05fe9-5505-4978-9b33-7a440c0b9c58] Pending / Ready:ContainersNotReady (containers with unready status: [private-image-eu]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image-eu])
helpers_test.go:342: "private-image-eu-545d57c67f-6rj2x" [5be05fe9-5505-4978-9b33-7a440c0b9c58] Running
addons_test.go:705: (dbg) TestAddons/serial/GCPAuth: integration-test=private-image-eu healthy within 9.006359062s
--- PASS: TestAddons/serial/GCPAuth (38.03s)

TestAddons/StoppedEnableDisable (10.98s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:132: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-20220509082454-6723
addons_test.go:132: (dbg) Done: out/minikube-linux-amd64 stop -p addons-20220509082454-6723: (10.777795181s)
addons_test.go:136: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-20220509082454-6723
addons_test.go:140: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-20220509082454-6723
--- PASS: TestAddons/StoppedEnableDisable (10.98s)

TestCertOptions (34.08s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-20220509085650-6723 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-20220509085650-6723 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (30.603298283s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-20220509085650-6723 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-20220509085650-6723 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-20220509085650-6723 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-20220509085650-6723" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-20220509085650-6723
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-20220509085650-6723: (2.58668036s)
--- PASS: TestCertOptions (34.08s)
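
Note: the openssl call above is how the extra SANs and the non-default API server port get verified inside the node. To eyeball just the SAN list (a sketch over the same command):

	out/minikube-linux-amd64 -p cert-options-20220509085650-6723 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -A1 'Subject Alternative Name'   # should list 127.0.0.1, 192.168.15.15, localhost, www.google.com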

TestCertExpiration (215.75s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-20220509085554-6723 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-20220509085554-6723 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (28.651505614s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-20220509085554-6723 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-20220509085554-6723 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (4.50250376s)
helpers_test.go:175: Cleaning up "cert-expiration-20220509085554-6723" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-20220509085554-6723
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-20220509085554-6723: (2.594614153s)
--- PASS: TestCertExpiration (215.75s)
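
Note: the two starts bracket a deliberate wait: the first issues certificates valid for only 3m, the harness lets them lapse (hence ~215s total for two short starts), and the point of the test is that the second start still succeeds once the expired certificates are regenerated. The flow, with a hypothetical profile name:

	out/minikube-linux-amd64 start -p cert-expiration-demo --memory=2048 --cert-expiration=3m --driver=docker
	sleep 180   # let the short-lived certificates expire
	out/minikube-linux-amd64 start -p cert-expiration-demo --memory=2048 --cert-expiration=8760h --driver=docker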

TestDockerFlags (32.41s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-20220509085628-6723 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0509 08:56:34.825848    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/addons-20220509082454-6723/client.crt: no such file or directory
docker_test.go:45: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-20220509085628-6723 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (27.577044302s)
docker_test.go:50: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-20220509085628-6723 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:61: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-20220509085628-6723 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-20220509085628-6723" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-20220509085628-6723
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-20220509085628-6723: (3.581255655s)
--- PASS: TestDockerFlags (32.41s)
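
Note: the two systemctl queries above carry the assertions: --docker-env values must appear in the docker unit's Environment, and --docker-opt values in its ExecStart. Checked by hand (expected values inferred from the flags passed to start):

	out/minikube-linux-amd64 -p docker-flags-20220509085628-6723 ssh \
	  "sudo systemctl show docker --property=Environment --no-pager"   # expect FOO=BAR and BAZ=BAT
	out/minikube-linux-amd64 -p docker-flags-20220509085628-6723 ssh \
	  "sudo systemctl show docker --property=ExecStart --no-pager"     # expect --debug and --icc=true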

TestKVMDriverInstallOrUpdate (1.66s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (1.66s)

TestErrorSpam/setup (28.52s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:78: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20220509083241-6723 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20220509083241-6723 --driver=docker  --container-runtime=docker
error_spam_test.go:78: (dbg) Done: out/minikube-linux-amd64 start -p nospam-20220509083241-6723 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20220509083241-6723 --driver=docker  --container-runtime=docker: (28.524113721s)
error_spam_test.go:88: acceptable stderr: "! Your cgroup does not allow setting memory."
--- PASS: TestErrorSpam/setup (28.52s)

TestErrorSpam/start (1.02s)

=== RUN   TestErrorSpam/start
error_spam_test.go:213: Cleaning up 1 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220509083241-6723 --log_dir /tmp/nospam-20220509083241-6723 start --dry-run
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220509083241-6723 --log_dir /tmp/nospam-20220509083241-6723 start --dry-run
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220509083241-6723 --log_dir /tmp/nospam-20220509083241-6723 start --dry-run
--- PASS: TestErrorSpam/start (1.02s)

TestErrorSpam/status (1.2s)

=== RUN   TestErrorSpam/status
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220509083241-6723 --log_dir /tmp/nospam-20220509083241-6723 status
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220509083241-6723 --log_dir /tmp/nospam-20220509083241-6723 status
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220509083241-6723 --log_dir /tmp/nospam-20220509083241-6723 status
--- PASS: TestErrorSpam/status (1.20s)

TestErrorSpam/pause (1.49s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220509083241-6723 --log_dir /tmp/nospam-20220509083241-6723 pause
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220509083241-6723 --log_dir /tmp/nospam-20220509083241-6723 pause
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220509083241-6723 --log_dir /tmp/nospam-20220509083241-6723 pause
--- PASS: TestErrorSpam/pause (1.49s)

TestErrorSpam/unpause (1.57s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220509083241-6723 --log_dir /tmp/nospam-20220509083241-6723 unpause
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220509083241-6723 --log_dir /tmp/nospam-20220509083241-6723 unpause
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220509083241-6723 --log_dir /tmp/nospam-20220509083241-6723 unpause
--- PASS: TestErrorSpam/unpause (1.57s)

TestErrorSpam/stop (11.03s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220509083241-6723 --log_dir /tmp/nospam-20220509083241-6723 stop
error_spam_test.go:156: (dbg) Done: out/minikube-linux-amd64 -p nospam-20220509083241-6723 --log_dir /tmp/nospam-20220509083241-6723 stop: (10.757998321s)
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220509083241-6723 --log_dir /tmp/nospam-20220509083241-6723 stop
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220509083241-6723 --log_dir /tmp/nospam-20220509083241-6723 stop
--- PASS: TestErrorSpam/stop (11.03s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1784: local sync path: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/files/etc/test/nested/copy/6723/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (44.27s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2163: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220509083328-6723 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2163: (dbg) Done: out/minikube-linux-amd64 start -p functional-20220509083328-6723 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (44.267989252s)
--- PASS: TestFunctional/serial/StartWithProxy (44.27s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (5.65s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:654: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220509083328-6723 --alsologtostderr -v=8
functional_test.go:654: (dbg) Done: out/minikube-linux-amd64 start -p functional-20220509083328-6723 --alsologtostderr -v=8: (5.653312069s)
functional_test.go:658: soft start took 5.653929908s for "functional-20220509083328-6723" cluster.
--- PASS: TestFunctional/serial/SoftStart (5.65s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:676: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.19s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:691: (dbg) Run:  kubectl --context functional-20220509083328-6723 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.19s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.24s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1044: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 cache add k8s.gcr.io/pause:3.1
functional_test.go:1044: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 cache add k8s.gcr.io/pause:3.3
functional_test.go:1044: (dbg) Done: out/minikube-linux-amd64 -p functional-20220509083328-6723 cache add k8s.gcr.io/pause:3.3: (1.0355885s)
functional_test.go:1044: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 cache add k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.24s)

TestFunctional/serial/CacheCmd/cache/add_local (0.91s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1072: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20220509083328-6723 /tmp/TestFunctionalserialCacheCmdcacheadd_local3306560265/001
functional_test.go:1084: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 cache add minikube-local-cache-test:functional-20220509083328-6723
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 cache delete minikube-local-cache-test:functional-20220509083328-6723
functional_test.go:1078: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20220509083328-6723
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.91s)
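
Taken together, the add_local steps above build a throwaway image on the host, round-trip it through minikube's image cache, and clean up. A minimal by-hand sketch using the same names as the log (the build-context path is the test's temp directory and is illustrative only):

    # build a local-only image on the host (context path is illustrative)
    docker build -t minikube-local-cache-test:functional-20220509083328-6723 /tmp/TestFunctionalserialCacheCmdcacheadd_local3306560265/001
    # push it into minikube's image cache, then remove it from the cache
    out/minikube-linux-amd64 -p functional-20220509083328-6723 cache add minikube-local-cache-test:functional-20220509083328-6723
    out/minikube-linux-amd64 -p functional-20220509083328-6723 cache delete minikube-local-cache-test:functional-20220509083328-6723
    # drop the host-side copy
    docker rmi minikube-local-cache-test:functional-20220509083328-6723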

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1097: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1105: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.39s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1119: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.39s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.91s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1142: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1148: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1148: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220509083328-6723 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (385.663418ms)

-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 cache reload
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.91s)
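
The cache_reload sequence above reads as four shell steps; the non-zero crictl exit in the middle is the expected "image gone" state, and cache reload is what restores it. A minimal by-hand sketch with the same profile and image:

    # remove the image inside the node, then confirm it is absent (crictl exits 1)
    out/minikube-linux-amd64 -p functional-20220509083328-6723 ssh sudo docker rmi k8s.gcr.io/pause:latest
    out/minikube-linux-amd64 -p functional-20220509083328-6723 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
    # re-push everything from minikube's on-disk cache into the node and re-check
    out/minikube-linux-amd64 -p functional-20220509083328-6723 cache reload
    out/minikube-linux-amd64 -p functional-20220509083328-6723 ssh sudo crictl inspecti k8s.gcr.io/pause:latest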

TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1167: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1167: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:711: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 kubectl -- --context functional-20220509083328-6723 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:736: (dbg) Run:  out/kubectl --context functional-20220509083328-6723 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (29.55s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:752: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220509083328-6723 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:752: (dbg) Done: out/minikube-linux-amd64 start -p functional-20220509083328-6723 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (29.544601618s)
functional_test.go:756: restart took 29.544720746s for "functional-20220509083328-6723" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (29.55s)
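
The restart above shows the --extra-config plumbing: a component.key=value triple is forwarded to the named control-plane component. A minimal sketch of the same invocation (flag values verbatim from the log):

    # restart the existing profile, injecting an apiserver admission plugin
    out/minikube-linux-amd64 start -p functional-20220509083328-6723 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all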

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:805: (dbg) Run:  kubectl --context functional-20220509083328-6723 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:820: etcd phase: Running
functional_test.go:830: etcd status: Ready
functional_test.go:820: kube-apiserver phase: Running
functional_test.go:830: kube-apiserver status: Ready
functional_test.go:820: kube-controller-manager phase: Running
functional_test.go:830: kube-controller-manager status: Ready
functional_test.go:820: kube-scheduler phase: Running
functional_test.go:830: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.45s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1231: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 logs
functional_test.go:1231: (dbg) Done: out/minikube-linux-amd64 -p functional-20220509083328-6723 logs: (1.454712134s)
--- PASS: TestFunctional/serial/LogsCmd (1.45s)

TestFunctional/serial/LogsFileCmd (1.49s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1245: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 logs --file /tmp/TestFunctionalserialLogsFileCmd3886105153/001/logs.txt
functional_test.go:1245: (dbg) Done: out/minikube-linux-amd64 -p functional-20220509083328-6723 logs --file /tmp/TestFunctionalserialLogsFileCmd3886105153/001/logs.txt: (1.486943872s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.49s)

TestFunctional/parallel/ConfigCmd (0.48s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 config get cpus
functional_test.go:1194: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220509083328-6723 config get cpus: exit status 14 (83.515309ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 config set cpus 2
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 config get cpus
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 config unset cpus
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 config get cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1194: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220509083328-6723 config get cpus: exit status 14 (76.628865ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.48s)
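
The round trip above pivots on exit code 14, which minikube returns here for a key that is not set. A by-hand sketch (expected results in the comments are inferred from the exit statuses in the log):

    out/minikube-linux-amd64 -p functional-20220509083328-6723 config unset cpus
    out/minikube-linux-amd64 -p functional-20220509083328-6723 config get cpus    # exit 14: key not found
    out/minikube-linux-amd64 -p functional-20220509083328-6723 config set cpus 2
    out/minikube-linux-amd64 -p functional-20220509083328-6723 config get cpus    # succeeds, prints the value
    out/minikube-linux-amd64 -p functional-20220509083328-6723 config unset cpus
    out/minikube-linux-amd64 -p functional-20220509083328-6723 config get cpus    # exit 14 again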

TestFunctional/parallel/DashboardCmd (13.7s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:900: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20220509083328-6723 --alsologtostderr -v=1]

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20220509083328-6723 --alsologtostderr -v=1] ...
helpers_test.go:506: unable to kill pid 49474: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.70s)

TestFunctional/parallel/DryRun (0.69s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:969: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220509083328-6723 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:969: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20220509083328-6723 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (283.862184ms)

-- stdout --
	* [functional-20220509083328-6723] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14070
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on existing profile
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	
-- /stdout --
** stderr ** 
	I0509 08:35:17.939170   48653 out.go:296] Setting OutFile to fd 1 ...
	I0509 08:35:17.939296   48653 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0509 08:35:17.939305   48653 out.go:309] Setting ErrFile to fd 2...
	I0509 08:35:17.939312   48653 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0509 08:35:17.939436   48653 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/bin
	I0509 08:35:17.939679   48653 out.go:303] Setting JSON to false
	I0509 08:35:17.940870   48653 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":1072,"bootTime":1652084246,"procs":402,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1024-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0509 08:35:17.940944   48653 start.go:125] virtualization: kvm guest
	I0509 08:35:17.944314   48653 out.go:177] * [functional-20220509083328-6723] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0509 08:35:17.946362   48653 out.go:177]   - MINIKUBE_LOCATION=14070
	I0509 08:35:17.947964   48653 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0509 08:35:17.950385   48653 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/kubeconfig
	I0509 08:35:17.951954   48653 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube
	I0509 08:35:17.953486   48653 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0509 08:35:17.955399   48653 config.go:178] Loaded profile config "functional-20220509083328-6723": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.0
	I0509 08:35:17.955805   48653 driver.go:346] Setting default libvirt URI to qemu:///system
	I0509 08:35:18.010401   48653 docker.go:137] docker version: linux-20.10.15
	I0509 08:35:18.010501   48653 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0509 08:35:18.137244   48653 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:61 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:40 SystemTime:2022-05-09 08:35:18.043325765 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1024-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0509 08:35:18.137353   48653 docker.go:254] overlay module found
	I0509 08:35:18.140022   48653 out.go:177] * Using the docker driver based on existing profile
	I0509 08:35:18.141938   48653 start.go:284] selected driver: docker
	I0509 08:35:18.141979   48653 start.go:801] validating driver "docker" against &{Name:functional-20220509083328-6723 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.0 ClusterName:functional-20220509083328-6723 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.24.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0509 08:35:18.142097   48653 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0509 08:35:18.142141   48653 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0509 08:35:18.142162   48653 out.go:239] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0509 08:35:18.144071   48653 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0509 08:35:18.146446   48653 out.go:177] 
	W0509 08:35:18.148654   48653 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0509 08:35:18.150433   48653 out.go:177] 

** /stderr **
functional_test.go:986: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220509083328-6723 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.69s)
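
The exit status 23 above is the point of the test: --dry-run still runs driver and resource validation, so the 250MB request is rejected with RSRC_INSUFFICIENT_REQ_MEMORY before anything is created. Condensed (flags verbatim from the log):

    # under-provisioned dry run: exits 23 with RSRC_INSUFFICIENT_REQ_MEMORY
    out/minikube-linux-amd64 start -p functional-20220509083328-6723 --dry-run --memory 250MB \
      --alsologtostderr --driver=docker --container-runtime=docker
    # dry run with the profile's existing settings: validates cleanly
    out/minikube-linux-amd64 start -p functional-20220509083328-6723 --dry-run \
      --alsologtostderr -v=1 --driver=docker --container-runtime=docker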

TestFunctional/parallel/InternationalLanguage (0.27s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1015: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220509083328-6723 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1015: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20220509083328-6723 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (269.224051ms)

-- stdout --
	* [functional-20220509083328-6723] minikube v1.25.2 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14070
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Utilisation du pilote docker basé sur le profil existant
	  - Plus d'informations: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	
-- /stdout --
** stderr ** 
	I0509 08:35:12.456215   46872 out.go:296] Setting OutFile to fd 1 ...
	I0509 08:35:12.456324   46872 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0509 08:35:12.456328   46872 out.go:309] Setting ErrFile to fd 2...
	I0509 08:35:12.456333   46872 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0509 08:35:12.456504   46872 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/bin
	I0509 08:35:12.456805   46872 out.go:303] Setting JSON to false
	I0509 08:35:12.457973   46872 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":1066,"bootTime":1652084246,"procs":395,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1024-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0509 08:35:12.458047   46872 start.go:125] virtualization: kvm guest
	I0509 08:35:12.461067   46872 out.go:177] * [functional-20220509083328-6723] minikube v1.25.2 sur Ubuntu 20.04 (kvm/amd64)
	I0509 08:35:12.462901   46872 out.go:177]   - MINIKUBE_LOCATION=14070
	I0509 08:35:12.464563   46872 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0509 08:35:12.466229   46872 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/kubeconfig
	I0509 08:35:12.467871   46872 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube
	I0509 08:35:12.469349   46872 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0509 08:35:12.471548   46872 config.go:178] Loaded profile config "functional-20220509083328-6723": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.0
	I0509 08:35:12.472227   46872 driver.go:346] Setting default libvirt URI to qemu:///system
	I0509 08:35:12.518833   46872 docker.go:137] docker version: linux-20.10.15
	I0509 08:35:12.518962   46872 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0509 08:35:12.639767   46872 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:61 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2022-05-09 08:35:12.551327507 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1024-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0509 08:35:12.639965   46872 docker.go:254] overlay module found
	I0509 08:35:12.642703   46872 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0509 08:35:12.644207   46872 start.go:284] selected driver: docker
	I0509 08:35:12.644231   46872 start.go:801] validating driver "docker" against &{Name:functional-20220509083328-6723 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.0 ClusterName:functional-20220509083328-6723 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.24.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0509 08:35:12.644384   46872 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0509 08:35:12.644441   46872 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0509 08:35:12.644469   46872 out.go:239] ! Votre groupe de contrôle ne permet pas de définir la mémoire.
	! Votre groupe de contrôle ne permet pas de définir la mémoire.
	I0509 08:35:12.646280   46872 out.go:177]   - Plus d'informations: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0509 08:35:12.648350   46872 out.go:177] 
	W0509 08:35:12.650320   46872 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0509 08:35:12.652100   46872 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.27s)

TestFunctional/parallel/StatusCmd (1.91s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:849: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 status

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:855: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:867: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.91s)
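
The three invocations above cover minikube status's default, Go-template, and JSON output modes. Condensed for by-hand use (the template string, including its "kublet" key label, is verbatim from the test; the quotes are added here only so a shell does not mangle it):

    out/minikube-linux-amd64 -p functional-20220509083328-6723 status
    out/minikube-linux-amd64 -p functional-20220509083328-6723 status \
      -f 'host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
    out/minikube-linux-amd64 -p functional-20220509083328-6723 status -o json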

TestFunctional/parallel/ServiceCmd (10.21s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1435: (dbg) Run:  kubectl --context functional-20220509083328-6723 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-20220509083328-6723 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:342: "hello-node-54c4b5c49f-sp8w2" [85d4f645-35d1-4992-a381-c97081bddafd] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:342: "hello-node-54c4b5c49f-sp8w2" [85d4f645-35d1-4992-a381-c97081bddafd] Running

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 7.007903525s
functional_test.go:1451: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 service list

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1465: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 service --namespace=default --https --url hello-node

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1478: found endpoint: https://192.168.49.2:31323
functional_test.go:1493: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 service hello-node --url --format={{.IP}}

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1507: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 service hello-node --url

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1513: found endpoint for hello-node: http://192.168.49.2:31323
--- PASS: TestFunctional/parallel/ServiceCmd (10.21s)
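
Stripped of harness noise, the service workflow above is: create a deployment, expose it as a NodePort, then let minikube resolve the node URL. A by-hand sketch with the same names, image, and port:

    kubectl --context functional-20220509083328-6723 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
    kubectl --context functional-20220509083328-6723 expose deployment hello-node --type=NodePort --port=8080
    # once the pod is Running:
    out/minikube-linux-amd64 -p functional-20220509083328-6723 service list
    out/minikube-linux-amd64 -p functional-20220509083328-6723 service --namespace=default --https --url hello-node
    out/minikube-linux-amd64 -p functional-20220509083328-6723 service hello-node --url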

TestFunctional/parallel/ServiceCmdConnect (12.64s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1561: (dbg) Run:  kubectl --context functional-20220509083328-6723 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1567: (dbg) Run:  kubectl --context functional-20220509083328-6723 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1572: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:342: "hello-node-connect-578cdc45cb-6hpxz" [b59ac468-206e-426c-a46a-e48e3518d657] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

=== CONT  TestFunctional/parallel/ServiceCmdConnect
helpers_test.go:342: "hello-node-connect-578cdc45cb-6hpxz" [b59ac468-206e-426c-a46a-e48e3518d657] Running

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1572: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.007857569s
functional_test.go:1581: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 service hello-node-connect --url

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1587: found endpoint for hello-node-connect: http://192.168.49.2:32702
functional_test.go:1607: http://192.168.49.2:32702: success! body:

Hostname: hello-node-connect-578cdc45cb-6hpxz

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=172.17.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32702
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.64s)

TestFunctional/parallel/AddonsCmd (0.19s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1622: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 addons list
functional_test.go:1634: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

TestFunctional/parallel/PersistentVolumeClaim (46.26s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:342: "storage-provisioner" [3ef5f1b5-2d8f-433c-b574-dda2fd09ef10] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.009864242s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-20220509083328-6723 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-20220509083328-6723 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20220509083328-6723 get pvc myclaim -o=json

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20220509083328-6723 get pvc myclaim -o=json

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20220509083328-6723 get pvc myclaim -o=json

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20220509083328-6723 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20220509083328-6723 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [a0f59757-b590-49f5-aa5c-35e05cf78439] Pending

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [a0f59757-b590-49f5-aa5c-35e05cf78439] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [a0f59757-b590-49f5-aa5c-35e05cf78439] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.007286986s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-20220509083328-6723 exec sp-pod -- touch /tmp/mount/foo

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-20220509083328-6723 delete -f testdata/storage-provisioner/pod.yaml

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-20220509083328-6723 delete -f testdata/storage-provisioner/pod.yaml: (1.264857142s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20220509083328-6723 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [4a5f6bc0-d7c3-4929-ae5c-9d3e7b91eafe] Pending

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [4a5f6bc0-d7c3-4929-ae5c-9d3e7b91eafe] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [4a5f6bc0-d7c3-4929-ae5c-9d3e7b91eafe] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 18.00932866s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-20220509083328-6723 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (46.26s)
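
The two pod generations above share one claim, which is what the test proves: a file written through the mount survives deletion of the first pod and is visible to its replacement. In outline (manifests are the test's testdata files):

    kubectl --context functional-20220509083328-6723 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-20220509083328-6723 apply -f testdata/storage-provisioner/pod.yaml
    # write through the mounted claim, then recreate the pod
    kubectl --context functional-20220509083328-6723 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-20220509083328-6723 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-20220509083328-6723 apply -f testdata/storage-provisioner/pod.yaml
    # the file written by the first pod is still there
    kubectl --context functional-20220509083328-6723 exec sp-pod -- ls /tmp/mount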

TestFunctional/parallel/SSHCmd (0.86s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1657: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 ssh "echo hello"

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1674: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.86s)

TestFunctional/parallel/CpCmd (1.89s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 ssh -n functional-20220509083328-6723 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 cp functional-20220509083328-6723:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd513572667/001/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 ssh -n functional-20220509083328-6723 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.89s)
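
The cp test is a round trip, host file into the node and back out, with contents verified over ssh after each hop. Sketch (the host-side destination below stands in for the test's temp directory):

    # host -> node, then verify
    out/minikube-linux-amd64 -p functional-20220509083328-6723 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-linux-amd64 -p functional-20220509083328-6723 ssh -n functional-20220509083328-6723 "sudo cat /home/docker/cp-test.txt"
    # node -> host (destination path here is illustrative)
    out/minikube-linux-amd64 -p functional-20220509083328-6723 cp functional-20220509083328-6723:/home/docker/cp-test.txt /tmp/cp-test.txt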

TestFunctional/parallel/MySQL (25.18s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1722: (dbg) Run:  kubectl --context functional-20220509083328-6723 replace --force -f testdata/mysql.yaml
functional_test.go:1728: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-67f7d69d8b-7zxmg" [609a5d8a-3c71-40a2-89c9-0273529ce408] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-67f7d69d8b-7zxmg" [609a5d8a-3c71-40a2-89c9-0273529ce408] Running

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1728: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 23.006532948s
functional_test.go:1736: (dbg) Run:  kubectl --context functional-20220509083328-6723 exec mysql-67f7d69d8b-7zxmg -- mysql -ppassword -e "show databases;"

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1736: (dbg) Non-zero exit: kubectl --context functional-20220509083328-6723 exec mysql-67f7d69d8b-7zxmg -- mysql -ppassword -e "show databases;": exit status 1 (130.846311ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1736: (dbg) Run:  kubectl --context functional-20220509083328-6723 exec mysql-67f7d69d8b-7zxmg -- mysql -ppassword -e "show databases;"
functional_test.go:1736: (dbg) Non-zero exit: kubectl --context functional-20220509083328-6723 exec mysql-67f7d69d8b-7zxmg -- mysql -ppassword -e "show databases;": exit status 1 (148.269357ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1736: (dbg) Run:  kubectl --context functional-20220509083328-6723 exec mysql-67f7d69d8b-7zxmg -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.18s)

TestFunctional/parallel/FileSync (0.46s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1858: Checking for existence of /etc/test/nested/copy/6723/hosts within VM
functional_test.go:1860: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 ssh "sudo cat /etc/test/nested/copy/6723/hosts"
functional_test.go:1865: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.46s)

TestFunctional/parallel/CertSync (2.75s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1901: Checking for existence of /etc/ssl/certs/6723.pem within VM
functional_test.go:1902: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 ssh "sudo cat /etc/ssl/certs/6723.pem"
functional_test.go:1901: Checking for existence of /usr/share/ca-certificates/6723.pem within VM
functional_test.go:1902: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 ssh "sudo cat /usr/share/ca-certificates/6723.pem"
functional_test.go:1901: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1902: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 ssh "sudo cat /etc/ssl/certs/51391683.0"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1928: Checking for existence of /etc/ssl/certs/67232.pem within VM
functional_test.go:1929: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 ssh "sudo cat /etc/ssl/certs/67232.pem"
functional_test.go:1928: Checking for existence of /usr/share/ca-certificates/67232.pem within VM
functional_test.go:1929: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 ssh "sudo cat /usr/share/ca-certificates/67232.pem"
functional_test.go:1928: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1929: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.75s)

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:214: (dbg) Run:  kubectl --context functional-20220509083328-6723 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.45s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1956: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 ssh "sudo systemctl is-active crio"

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1956: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220509083328-6723 ssh "sudo systemctl is-active crio": exit status 1 (454.593248ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.45s)

TestFunctional/parallel/DockerEnv/bash (1.67s)

=== RUN   TestFunctional/parallel/DockerEnv/bash

=== CONT  TestFunctional/parallel/DockerEnv/bash
functional_test.go:494: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-20220509083328-6723 docker-env) && out/minikube-linux-amd64 status -p functional-20220509083328-6723"

=== CONT  TestFunctional/parallel/DockerEnv/bash
functional_test.go:494: (dbg) Done: /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-20220509083328-6723 docker-env) && out/minikube-linux-amd64 status -p functional-20220509083328-6723": (1.054556209s)
functional_test.go:517: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-20220509083328-6723 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.67s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2185: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (1.93s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2199: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 version -o=json --components
2022/05/09 08:35:31 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2199: (dbg) Done: out/minikube-linux-amd64 -p functional-20220509083328-6723 version -o=json --components: (1.934279283s)
--- PASS: TestFunctional/parallel/Version/components (1.93s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 image ls --format short
functional_test.go:261: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220509083328-6723 image ls --format short:
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.7
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.24.0
k8s.gcr.io/kube-proxy:v1.24.0
k8s.gcr.io/kube-controller-manager:v1.24.0
k8s.gcr.io/kube-apiserver:v1.24.0
k8s.gcr.io/etcd:3.5.3-0
k8s.gcr.io/echoserver:1.8
k8s.gcr.io/coredns/coredns:v1.8.6
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-20220509083328-6723
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-20220509083328-6723
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.39s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 image ls --format table
functional_test.go:261: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220509083328-6723 image ls --format table:
|---------------------------------------------|--------------------------------|---------------|--------|
|                    Image                    |              Tag               |   Image ID    |  Size  |
|---------------------------------------------|--------------------------------|---------------|--------|
| docker.io/library/nginx                     | alpine                         | 51696c87e77e4 | 23.4MB |
| k8s.gcr.io/echoserver                       | 1.8                            | 82e4c8a736a4f | 95.4MB |
| docker.io/library/minikube-local-cache-test | functional-20220509083328-6723 | 2653b2797e01d | 30B    |
| k8s.gcr.io/kube-proxy                       | v1.24.0                        | 77b49675beae1 | 110MB  |
| k8s.gcr.io/kube-controller-manager          | v1.24.0                        | 88784fb4ac2f6 | 119MB  |
| docker.io/library/nginx                     | latest                         | fa5269854a5e6 | 142MB  |
| k8s.gcr.io/pause                            | 3.3                            | 0184c1613d929 | 683kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc                   | 56cc512116c8f | 4.4MB  |
| k8s.gcr.io/pause                            | latest                         | 350b164e7ae1d | 240kB  |
| k8s.gcr.io/pause                            | 3.7                            | 221177c6082a8 | 711kB  |
| k8s.gcr.io/coredns/coredns                  | v1.8.6                         | a4ca41631cc7a | 46.8MB |
| docker.io/kubernetesui/metrics-scraper      | <none>                         | 7801cfc6d5c07 | 34.4MB |
| gcr.io/google-containers/addon-resizer      | functional-20220509083328-6723 | ffd4cfbbe753e | 32.9MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                             | 6e38f40d628db | 31.5MB |
| k8s.gcr.io/kube-apiserver                   | v1.24.0                        | 529072250ccc6 | 130MB  |
| k8s.gcr.io/kube-scheduler                   | v1.24.0                        | e3ed7dee73e93 | 51MB   |
| k8s.gcr.io/etcd                             | 3.5.3-0                        | aebe758cef4cd | 299MB  |
| docker.io/kubernetesui/dashboard            | <none>                         | 7fff914c4a615 | 243MB  |
| k8s.gcr.io/pause                            | 3.1                            | da86e6ba6ca19 | 742kB  |
|---------------------------------------------|--------------------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.52s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 image ls --format json

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220509083328-6723 image ls --format json:
[{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"},{"id":"2653b2797e01d9ae47cbd29984a8203b8f30f667219e8bdc6f2365022b80ce50","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-20220509083328-6723"],"size":"30"},{"id":"88784fb4ac2f696b8fed607f6aa8bd5710544652f4ca166462937a36368f6364","repoDigests":[],"repoTags":["k8s.gcr.io/kube-controller-manager:v1.24.0"],"size":"119000000"},{"id":"e3ed7dee73e9341d613017a135d2e8e6f169b16ffdcf0564a67147aef941322d","repoDigests":[],"repoTags":["k8s.gcr.io/kube-scheduler:v1.24.0"],"size":"51000000"},{"id":"7801cfc6d5c072eb114355d369c830641064a246b5a774bcd668fac75ec728e9","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"34400000"},{"id":"0184c1613d929311
26feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"},{"id":"a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03","repoDigests":[],"repoTags":["k8s.gcr.io/coredns/coredns:v1.8.6"],"size":"46800000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-20220509083328-6723"],"size":"32900000"},{"id":"529072250ccc6301cb341cd7359c9641b94a6f837f86f82b1223a59bb0712e64","repoDigests":[],"repoTags":["k8s.gcr.io/kube-apiserver:v1.24.0"],"size":"130000000"},{"id":"fa5269854a5e615e51a72b17ad3fd1e01268f278a6684c8ed3c5f0cdce3f230b","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{"id":"aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2
b","repoDigests":[],"repoTags":["k8s.gcr.io/etcd:3.5.3-0"],"size":"299000000"},{"id":"51696c87e77e4ff7a53af9be837f35d4eacdb47b4ca83ba5fd5e4b5101d98502","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"23400000"},{"id":"221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.7"],"size":"711000"},{"id":"77b49675beae1d7a23dbd96d367e8ae0fd3318631f270455e0c3e5e771232505","repoDigests":[],"repoTags":["k8s.gcr.io/kube-proxy:v1.24.0"],"size":"110000000"},{"id":"7fff914c4a615552dde44bde1183cdaf1656495d54327823c37e897e6c999fe8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"243000000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.38s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 image ls --format yaml
functional_test.go:261: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220509083328-6723 image ls --format yaml:
- id: 77b49675beae1d7a23dbd96d367e8ae0fd3318631f270455e0c3e5e771232505
repoDigests: []
repoTags:
- k8s.gcr.io/kube-proxy:v1.24.0
size: "110000000"
- id: fa5269854a5e615e51a72b17ad3fd1e01268f278a6684c8ed3c5f0cdce3f230b
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
- id: aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b
repoDigests: []
repoTags:
- k8s.gcr.io/etcd:3.5.3-0
size: "299000000"
- id: 51696c87e77e4ff7a53af9be837f35d4eacdb47b4ca83ba5fd5e4b5101d98502
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "23400000"
- id: 7fff914c4a615552dde44bde1183cdaf1656495d54327823c37e897e6c999fe8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "243000000"
- id: a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03
repoDigests: []
repoTags:
- k8s.gcr.io/coredns/coredns:v1.8.6
size: "46800000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: 529072250ccc6301cb341cd7359c9641b94a6f837f86f82b1223a59bb0712e64
repoDigests: []
repoTags:
- k8s.gcr.io/kube-apiserver:v1.24.0
size: "130000000"
- id: e3ed7dee73e9341d613017a135d2e8e6f169b16ffdcf0564a67147aef941322d
repoDigests: []
repoTags:
- k8s.gcr.io/kube-scheduler:v1.24.0
size: "51000000"
- id: 88784fb4ac2f696b8fed607f6aa8bd5710544652f4ca166462937a36368f6364
repoDigests: []
repoTags:
- k8s.gcr.io/kube-controller-manager:v1.24.0
size: "119000000"
- id: 221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.7
size: "711000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-20220509083328-6723
size: "32900000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: 2653b2797e01d9ae47cbd29984a8203b8f30f667219e8bdc6f2365022b80ce50
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-20220509083328-6723
size: "30"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"
- id: 7801cfc6d5c072eb114355d369c830641064a246b5a774bcd668fac75ec728e9
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "34400000"

                                                
                                                
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (5.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:303: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 ssh pgrep buildkitd

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:303: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220509083328-6723 ssh pgrep buildkitd: exit status 1 (497.090268ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 image build -t localhost/my-image:functional-20220509083328-6723 testdata/build

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p functional-20220509083328-6723 image build -t localhost/my-image:functional-20220509083328-6723 testdata/build: (4.772774641s)
functional_test.go:315: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220509083328-6723 image build -t localhost/my-image:functional-20220509083328-6723 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 33813b48312d
Removing intermediate container 33813b48312d
---> 1d0c486be1b6
Step 3/3 : ADD content.txt /
---> fd02e338d0f1
Successfully built fd02e338d0f1
Successfully tagged localhost/my-image:functional-20220509083328-6723
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.60s)

TestFunctional/parallel/ImageCommands/Setup (1.02s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:337: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8

=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-20220509083328-6723
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.02s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.63s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1268: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1273: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.63s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-linux-amd64 -p functional-20220509083328-6723 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.34s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-20220509083328-6723 apply -f testdata/testsvc.yaml

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:342: "nginx-svc" [4aaa59bb-0bef-403b-8213-a70841025671] Pending

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:342: "nginx-svc" [4aaa59bb-0bef-403b-8213-a70841025671] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:342: "nginx-svc" [4aaa59bb-0bef-403b-8213-a70841025671] Running

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.013183828s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.34s)

TestFunctional/parallel/ProfileCmd/profile_list (0.57s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1308: (dbg) Run:  out/minikube-linux-amd64 profile list

=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1313: Took "496.118262ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1322: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1327: Took "70.712054ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.57s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.84s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220509083328-6723

=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:350: (dbg) Done: out/minikube-linux-amd64 -p functional-20220509083328-6723 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220509083328-6723: (4.529021327s)
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.84s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.5s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1359: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1364: Took "436.818837ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1372: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light

=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1377: Took "67.411193ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.50s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220509083328-6723

=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Done: out/minikube-linux-amd64 -p functional-20220509083328-6723 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220509083328-6723: (3.080441436s)
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.33s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.66s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:230: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:235: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-20220509083328-6723
functional_test.go:240: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220509083328-6723

=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:240: (dbg) Done: out/minikube-linux-amd64 -p functional-20220509083328-6723 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220509083328-6723: (3.179499139s)
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.66s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220509083328-6723 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:234: tunnel at http://10.107.74.32 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-linux-amd64 -p functional-20220509083328-6723 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.98s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:375: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 image save gcr.io/google-containers/addon-resizer:functional-20220509083328-6723 /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.98s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:387: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 image rm gcr.io/google-containers/addon-resizer:functional-20220509083328-6723
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:404: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 image load /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar

=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:404: (dbg) Done: out/minikube-linux-amd64 -p functional-20220509083328-6723 image load /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar: (1.008531988s)
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.29s)

TestFunctional/parallel/MountCmd/any-port (13.45s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:66: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20220509083328-6723 /tmp/TestFunctionalparallelMountCmdany-port938809965/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:100: wrote "test-1652085312656433114" to /tmp/TestFunctionalparallelMountCmdany-port938809965/001/created-by-test
functional_test_mount_test.go:100: wrote "test-1652085312656433114" to /tmp/TestFunctionalparallelMountCmdany-port938809965/001/created-by-test-removed-by-pod
functional_test_mount_test.go:100: wrote "test-1652085312656433114" to /tmp/TestFunctionalparallelMountCmdany-port938809965/001/test-1652085312656433114
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220509083328-6723 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (425.53295ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:122: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 ssh -- ls -la /mount-9p

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:126: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 May  9 08:35 created-by-test
-rw-r--r-- 1 docker docker 24 May  9 08:35 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 May  9 08:35 test-1652085312656433114
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 ssh cat /mount-9p/test-1652085312656433114

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:141: (dbg) Run:  kubectl --context functional-20220509083328-6723 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:342: "busybox-mount" [325e4685-3ec8-44f5-b1cc-0a1990cf2b87] Pending

=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [325e4685-3ec8-44f5-b1cc-0a1990cf2b87] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])

=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [325e4685-3ec8-44f5-b1cc-0a1990cf2b87] Running

=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [325e4685-3ec8-44f5-b1cc-0a1990cf2b87] Running: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted

=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [325e4685-3ec8-44f5-b1cc-0a1990cf2b87] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 9.010033864s
functional_test_mount_test.go:162: (dbg) Run:  kubectl --context functional-20220509083328-6723 logs busybox-mount
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 ssh stat /mount-9p/created-by-pod

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:83: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 ssh "sudo umount -f /mount-9p"

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:87: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20220509083328-6723 /tmp/TestFunctionalparallelMountCmdany-port938809965/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (13.45s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.84s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:414: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-20220509083328-6723
functional_test.go:419: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220509083328-6723

=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Done: out/minikube-linux-amd64 -p functional-20220509083328-6723 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220509083328-6723: (2.773193445s)
functional_test.go:424: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-20220509083328-6723
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.84s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.24s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2048: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.24s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.25s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2048: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.25s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.24s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2048: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.24s)

TestFunctional/parallel/MountCmd/specific-port (2.88s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:206: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20220509083328-6723 /tmp/TestFunctionalparallelMountCmdspecific-port2419158319/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220509083328-6723 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (560.505707ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **

=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:250: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 ssh -- ls -la /mount-9p

=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:254: guest mount directory contents
total 0
functional_test_mount_test.go:256: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20220509083328-6723 /tmp/TestFunctionalparallelMountCmdspecific-port2419158319/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:257: reading mount text
functional_test_mount_test.go:271: done reading mount text
functional_test_mount_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220509083328-6723 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220509083328-6723 ssh "sudo umount -f /mount-9p": exit status 1 (493.024503ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:225: "out/minikube-linux-amd64 -p functional-20220509083328-6723 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:227: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20220509083328-6723 /tmp/TestFunctionalparallelMountCmdspecific-port2419158319/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.88s)

TestFunctional/delete_addon-resizer_images (0.11s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-20220509083328-6723
--- PASS: TestFunctional/delete_addon-resizer_images (0.11s)

TestFunctional/delete_my-image_image (0.03s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:193: (dbg) Run:  docker rmi -f localhost/my-image:functional-20220509083328-6723
--- PASS: TestFunctional/delete_my-image_image (0.03s)

TestFunctional/delete_minikube_cached_images (0.03s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:201: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20220509083328-6723
--- PASS: TestFunctional/delete_minikube_cached_images (0.03s)

TestIngressAddonLegacy/StartLegacyK8sCluster (93.63s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-20220509083550-6723 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0509 08:36:34.826191    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/addons-20220509082454-6723/client.crt: no such file or directory
E0509 08:36:34.831813    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/addons-20220509082454-6723/client.crt: no such file or directory
E0509 08:36:34.842132    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/addons-20220509082454-6723/client.crt: no such file or directory
E0509 08:36:34.862441    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/addons-20220509082454-6723/client.crt: no such file or directory
E0509 08:36:34.902749    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/addons-20220509082454-6723/client.crt: no such file or directory
E0509 08:36:34.983103    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/addons-20220509082454-6723/client.crt: no such file or directory
E0509 08:36:35.143566    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/addons-20220509082454-6723/client.crt: no such file or directory
E0509 08:36:35.464445    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/addons-20220509082454-6723/client.crt: no such file or directory
E0509 08:36:36.104913    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/addons-20220509082454-6723/client.crt: no such file or directory
E0509 08:36:37.385817    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/addons-20220509082454-6723/client.crt: no such file or directory
E0509 08:36:39.946814    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/addons-20220509082454-6723/client.crt: no such file or directory
E0509 08:36:45.067148    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/addons-20220509082454-6723/client.crt: no such file or directory
E0509 08:36:55.308180    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/addons-20220509082454-6723/client.crt: no such file or directory
E0509 08:37:15.788442    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/addons-20220509082454-6723/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-20220509083550-6723 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (1m33.63111244s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (93.63s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (12.27s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220509083550-6723 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20220509083550-6723 addons enable ingress --alsologtostderr -v=5: (12.26587843s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (12.27s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.59s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220509083550-6723 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.59s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (42.31s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:162: (dbg) Run:  kubectl --context ingress-addon-legacy-20220509083550-6723 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:162: (dbg) Done: kubectl --context ingress-addon-legacy-20220509083550-6723 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (13.284080363s)
addons_test.go:182: (dbg) Run:  kubectl --context ingress-addon-legacy-20220509083550-6723 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:195: (dbg) Run:  kubectl --context ingress-addon-legacy-20220509083550-6723 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:200: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [0ce6de95-5d72-4273-a514-d9fc4cb9b7b0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:342: "nginx" [0ce6de95-5d72-4273-a514-d9fc4cb9b7b0] Running
E0509 08:37:56.748771    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/addons-20220509082454-6723/client.crt: no such file or directory
addons_test.go:200: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 8.007392439s
addons_test.go:212: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220509083550-6723 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:236: (dbg) Run:  kubectl --context ingress-addon-legacy-20220509083550-6723 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:241: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220509083550-6723 ip
addons_test.go:247: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220509083550-6723 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20220509083550-6723 addons disable ingress-dns --alsologtostderr -v=1: (12.339474463s)
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220509083550-6723 addons disable ingress --alsologtostderr -v=1
addons_test.go:261: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20220509083550-6723 addons disable ingress --alsologtostderr -v=1: (7.30162562s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (42.31s)
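Note: the ingress validation above reduces to enabling the addon, deploying an nginx pod plus a v1beta1 Ingress, and curling the node over ssh. A minimal manual reproduction, assuming an existing v1.18.20 profile (the name "legacy" below is a placeholder):

  $ out/minikube-linux-amd64 -p legacy addons enable ingress
  $ kubectl --context legacy wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
  $ out/minikube-linux-amd64 -p legacy ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"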

                                                
                                    
TestJSONOutput/start/Command (43.31s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-20220509083821-6723 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-20220509083821-6723 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (43.310480105s)
--- PASS: TestJSONOutput/start/Command (43.31s)
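Note: with --output=json, minikube prints one CloudEvents-style JSON object per line instead of human-readable text (the TestErrorJSONOutput stdout below shows the exact shape). A small sketch for following just the step messages, assuming jq is available and using a placeholder profile name:

  $ out/minikube-linux-amd64 start -p json-demo --output=json --user=testUser | jq -r '.data.message'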

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.66s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-20220509083821-6723 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.66s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.65s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-20220509083821-6723 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (11.02s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-20220509083821-6723 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-20220509083821-6723 --output=json --user=testUser: (11.017858343s)
--- PASS: TestJSONOutput/stop/Command (11.02s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.31s)

=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-20220509083919-6723 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-20220509083919-6723 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (82.276976ms)
-- stdout --
	{"specversion":"1.0","id":"1fe4f1be-98ec-40db-a5cc-e258bdc6c294","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-20220509083919-6723] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4e774798-9d6e-4b33-bf7e-26e058315e88","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=14070"}}
	{"specversion":"1.0","id":"f4324a65-ec6e-4c5a-ba6b-a12bdba96cf5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0bd9091a-77db-4d68-9375-8e332083f00c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/kubeconfig"}}
	{"specversion":"1.0","id":"305f17cf-69b8-4986-bae7-fcbdef3b7c91","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube"}}
	{"specversion":"1.0","id":"38109a7d-4e5d-47c2-87ac-efc81ab1db35","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"7d78e604-ceff-46c9-b4a9-8e69144598b1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-20220509083919-6723" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-20220509083919-6723
--- PASS: TestErrorJSONOutput (0.31s)
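Note: the run above fails fast with exit status 56, and because output is JSON the failure surfaces as a single event of type io.k8s.sigs.minikube.error carrying the exit code and the DRV_UNSUPPORTED_OS name. A sketch for isolating that event from the stream, assuming jq (the profile name is a placeholder):

  $ out/minikube-linux-amd64 start -p err-demo --output=json --driver=fail | jq 'select(.type == "io.k8s.sigs.minikube.error")'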

                                                
                                    
TestKicCustomNetwork/create_custom_network (32.29s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-20220509083919-6723 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-20220509083919-6723 --network=: (29.99792118s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-20220509083919-6723" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-20220509083919-6723
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-20220509083919-6723: (2.256969111s)
--- PASS: TestKicCustomNetwork/create_custom_network (32.29s)
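Note: an empty --network= value appears to leave minikube to create its own user-defined Docker network, named after the profile, which is why the check is just a name listing. Roughly, with a placeholder profile name:

  $ out/minikube-linux-amd64 start -p net-demo --network=
  $ docker network ls --format {{.Name}} | grep net-demo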

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (31.92s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-20220509083951-6723 --network=bridge
E0509 08:39:58.160046    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/functional-20220509083328-6723/client.crt: no such file or directory
E0509 08:39:58.165460    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/functional-20220509083328-6723/client.crt: no such file or directory
E0509 08:39:58.175818    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/functional-20220509083328-6723/client.crt: no such file or directory
E0509 08:39:58.196247    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/functional-20220509083328-6723/client.crt: no such file or directory
E0509 08:39:58.236638    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/functional-20220509083328-6723/client.crt: no such file or directory
E0509 08:39:58.317034    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/functional-20220509083328-6723/client.crt: no such file or directory
E0509 08:39:58.477502    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/functional-20220509083328-6723/client.crt: no such file or directory
E0509 08:39:58.798154    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/functional-20220509083328-6723/client.crt: no such file or directory
E0509 08:39:59.439085    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/functional-20220509083328-6723/client.crt: no such file or directory
E0509 08:40:00.719467    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/functional-20220509083328-6723/client.crt: no such file or directory
E0509 08:40:03.280017    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/functional-20220509083328-6723/client.crt: no such file or directory
E0509 08:40:08.400600    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/functional-20220509083328-6723/client.crt: no such file or directory
E0509 08:40:18.641552    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/functional-20220509083328-6723/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-20220509083951-6723 --network=bridge: (29.819595877s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-20220509083951-6723" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-20220509083951-6723
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-20220509083951-6723: (2.071643423s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (31.92s)

                                                
                                    
TestKicExistingNetwork (30.67s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-20220509084023-6723 --network=existing-network
E0509 08:40:39.121849    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/functional-20220509083328-6723/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-20220509084023-6723 --network=existing-network: (28.121156092s)
helpers_test.go:175: Cleaning up "existing-network-20220509084023-6723" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-20220509084023-6723
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-20220509084023-6723: (2.310810704s)
--- PASS: TestKicExistingNetwork (30.67s)
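Note: here the Docker network is created ahead of time and minikube is asked to join it rather than provision one. A by-hand sketch using the network name from the log (the profile name is a placeholder):

  $ docker network create existing-network
  $ out/minikube-linux-amd64 start -p existing-demo --network=existing-network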

                                                
                                    
TestKicCustomSubnet (30.31s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-20220509084054-6723 --subnet=192.168.60.0/24
E0509 08:41:20.082854    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/functional-20220509083328-6723/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-20220509084054-6723 --subnet=192.168.60.0/24: (27.919127101s)
kic_custom_network_test.go:133: (dbg) Run:  docker network inspect custom-subnet-20220509084054-6723 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-20220509084054-6723" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-20220509084054-6723
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-20220509084054-6723: (2.352604778s)
--- PASS: TestKicCustomSubnet (30.31s)
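Note: the subnet assertion is a single docker network inspect comparing the requested CIDR against the network's IPAM config. A reproduction sketch reusing the inspect format string from the log (the profile name is a placeholder):

  $ out/minikube-linux-amd64 start -p subnet-demo --subnet=192.168.60.0/24
  $ docker network inspect subnet-demo --format "{{(index .IPAM.Config 0).Subnet}}"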

                                                
                                    
TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (5.99s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-20220509084124-6723 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-20220509084124-6723 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (4.988679716s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.99s)
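Note: the flags above pin the host mount's ownership (--mount-uid/--mount-gid), its 9p message size (--mount-msize) and server port (--mount-port), and --no-kubernetes keeps the start minimal since only the mount is under test. Each later VerifyMount* step is simply an ls over ssh, e.g.:

  $ out/minikube-linux-amd64 -p mount-start-1-20220509084124-6723 ssh -- ls /minikube-host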

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.36s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-20220509084124-6723 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.36s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (5.7s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-20220509084124-6723 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
E0509 08:41:34.825907    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/addons-20220509082454-6723/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-20220509084124-6723 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (4.702265242s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.70s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.36s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20220509084124-6723 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.36s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.81s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-20220509084124-6723 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-20220509084124-6723 --alsologtostderr -v=5: (1.806699224s)
--- PASS: TestMountStart/serial/DeleteFirst (1.81s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.36s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20220509084124-6723 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.36s)

                                                
                                    
TestMountStart/serial/Stop (1.28s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-20220509084124-6723
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-20220509084124-6723: (1.279410083s)
--- PASS: TestMountStart/serial/Stop (1.28s)

                                                
                                    
TestMountStart/serial/RestartStopped (6.91s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-20220509084124-6723
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-20220509084124-6723: (5.910524524s)
--- PASS: TestMountStart/serial/RestartStopped (6.91s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.36s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20220509084124-6723 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.36s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (98.78s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220509084149-6723 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0509 08:42:02.509499    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/addons-20220509082454-6723/client.crt: no such file or directory
E0509 08:42:36.589855    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/ingress-addon-legacy-20220509083550-6723/client.crt: no such file or directory
E0509 08:42:36.595182    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/ingress-addon-legacy-20220509083550-6723/client.crt: no such file or directory
E0509 08:42:36.605514    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/ingress-addon-legacy-20220509083550-6723/client.crt: no such file or directory
E0509 08:42:36.625790    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/ingress-addon-legacy-20220509083550-6723/client.crt: no such file or directory
E0509 08:42:36.666166    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/ingress-addon-legacy-20220509083550-6723/client.crt: no such file or directory
E0509 08:42:36.747328    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/ingress-addon-legacy-20220509083550-6723/client.crt: no such file or directory
E0509 08:42:36.907772    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/ingress-addon-legacy-20220509083550-6723/client.crt: no such file or directory
E0509 08:42:37.228355    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/ingress-addon-legacy-20220509083550-6723/client.crt: no such file or directory
E0509 08:42:37.869002    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/ingress-addon-legacy-20220509083550-6723/client.crt: no such file or directory
E0509 08:42:39.149186    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/ingress-addon-legacy-20220509083550-6723/client.crt: no such file or directory
E0509 08:42:41.709541    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/ingress-addon-legacy-20220509083550-6723/client.crt: no such file or directory
E0509 08:42:42.003960    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/functional-20220509083328-6723/client.crt: no such file or directory
E0509 08:42:46.830235    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/ingress-addon-legacy-20220509083550-6723/client.crt: no such file or directory
E0509 08:42:57.071224    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/ingress-addon-legacy-20220509083550-6723/client.crt: no such file or directory
E0509 08:43:17.552437    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/ingress-addon-legacy-20220509083550-6723/client.crt: no such file or directory
multinode_test.go:83: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220509084149-6723 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m38.158061574s)
multinode_test.go:89: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220509084149-6723 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (98.78s)
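Note: a two-node cluster of the same shape can be brought up directly with the flags from the log; only the profile name below is a placeholder:

  $ out/minikube-linux-amd64 start -p mn-demo --wait=true --memory=2200 --nodes=2 --driver=docker --container-runtime=docker
  $ out/minikube-linux-amd64 -p mn-demo status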

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.04s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220509084149-6723 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220509084149-6723 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-20220509084149-6723 -- rollout status deployment/busybox: (2.275087026s)
multinode_test.go:490: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220509084149-6723 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220509084149-6723 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:510: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220509084149-6723 -- exec busybox-d46db594c-q7dgx -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220509084149-6723 -- exec busybox-d46db594c-rj7z5 -- nslookup kubernetes.io
multinode_test.go:520: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220509084149-6723 -- exec busybox-d46db594c-q7dgx -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220509084149-6723 -- exec busybox-d46db594c-rj7z5 -- nslookup kubernetes.default
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220509084149-6723 -- exec busybox-d46db594c-q7dgx -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220509084149-6723 -- exec busybox-d46db594c-rj7z5 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.04s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.9s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220509084149-6723 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220509084149-6723 -- exec busybox-d46db594c-q7dgx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220509084149-6723 -- exec busybox-d46db594c-q7dgx -- sh -c "ping -c 1 192.168.49.1"
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220509084149-6723 -- exec busybox-d46db594c-rj7z5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220509084149-6723 -- exec busybox-d46db594c-rj7z5 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.90s)
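Note: host.minikube.internal is the in-cluster name minikube maps to the host, which on the default Docker network here resolves to the gateway 192.168.49.1. The test resolves it inside each busybox pod and pings the result; per pod that is (the pod name suffix is a placeholder):

  $ kubectl exec busybox-d46db594c-xxxxx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
  $ kubectl exec busybox-d46db594c-xxxxx -- sh -c "ping -c 1 192.168.49.1"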

                                                
                                    
TestMultiNode/serial/AddNode (47.68s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20220509084149-6723 -v 3 --alsologtostderr
E0509 08:43:58.513190    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/ingress-addon-legacy-20220509083550-6723/client.crt: no such file or directory
multinode_test.go:108: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-20220509084149-6723 -v 3 --alsologtostderr: (46.88111992s)
multinode_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220509084149-6723 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (47.68s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.39s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.39s)

                                                
                                    
TestMultiNode/serial/CopyFile (12.82s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220509084149-6723 status --output json --alsologtostderr
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220509084149-6723 cp testdata/cp-test.txt multinode-20220509084149-6723:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220509084149-6723 ssh -n multinode-20220509084149-6723 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220509084149-6723 cp multinode-20220509084149-6723:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile551100132/001/cp-test_multinode-20220509084149-6723.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220509084149-6723 ssh -n multinode-20220509084149-6723 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220509084149-6723 cp multinode-20220509084149-6723:/home/docker/cp-test.txt multinode-20220509084149-6723-m02:/home/docker/cp-test_multinode-20220509084149-6723_multinode-20220509084149-6723-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220509084149-6723 ssh -n multinode-20220509084149-6723 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220509084149-6723 ssh -n multinode-20220509084149-6723-m02 "sudo cat /home/docker/cp-test_multinode-20220509084149-6723_multinode-20220509084149-6723-m02.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220509084149-6723 cp multinode-20220509084149-6723:/home/docker/cp-test.txt multinode-20220509084149-6723-m03:/home/docker/cp-test_multinode-20220509084149-6723_multinode-20220509084149-6723-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220509084149-6723 ssh -n multinode-20220509084149-6723 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220509084149-6723 ssh -n multinode-20220509084149-6723-m03 "sudo cat /home/docker/cp-test_multinode-20220509084149-6723_multinode-20220509084149-6723-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220509084149-6723 cp testdata/cp-test.txt multinode-20220509084149-6723-m02:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220509084149-6723 ssh -n multinode-20220509084149-6723-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220509084149-6723 cp multinode-20220509084149-6723-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile551100132/001/cp-test_multinode-20220509084149-6723-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220509084149-6723 ssh -n multinode-20220509084149-6723-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220509084149-6723 cp multinode-20220509084149-6723-m02:/home/docker/cp-test.txt multinode-20220509084149-6723:/home/docker/cp-test_multinode-20220509084149-6723-m02_multinode-20220509084149-6723.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220509084149-6723 ssh -n multinode-20220509084149-6723-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220509084149-6723 ssh -n multinode-20220509084149-6723 "sudo cat /home/docker/cp-test_multinode-20220509084149-6723-m02_multinode-20220509084149-6723.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220509084149-6723 cp multinode-20220509084149-6723-m02:/home/docker/cp-test.txt multinode-20220509084149-6723-m03:/home/docker/cp-test_multinode-20220509084149-6723-m02_multinode-20220509084149-6723-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220509084149-6723 ssh -n multinode-20220509084149-6723-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220509084149-6723 ssh -n multinode-20220509084149-6723-m03 "sudo cat /home/docker/cp-test_multinode-20220509084149-6723-m02_multinode-20220509084149-6723-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220509084149-6723 cp testdata/cp-test.txt multinode-20220509084149-6723-m03:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220509084149-6723 ssh -n multinode-20220509084149-6723-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220509084149-6723 cp multinode-20220509084149-6723-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile551100132/001/cp-test_multinode-20220509084149-6723-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220509084149-6723 ssh -n multinode-20220509084149-6723-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220509084149-6723 cp multinode-20220509084149-6723-m03:/home/docker/cp-test.txt multinode-20220509084149-6723:/home/docker/cp-test_multinode-20220509084149-6723-m03_multinode-20220509084149-6723.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220509084149-6723 ssh -n multinode-20220509084149-6723-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220509084149-6723 ssh -n multinode-20220509084149-6723 "sudo cat /home/docker/cp-test_multinode-20220509084149-6723-m03_multinode-20220509084149-6723.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220509084149-6723 cp multinode-20220509084149-6723-m03:/home/docker/cp-test.txt multinode-20220509084149-6723-m02:/home/docker/cp-test_multinode-20220509084149-6723-m03_multinode-20220509084149-6723-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220509084149-6723 ssh -n multinode-20220509084149-6723-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220509084149-6723 ssh -n multinode-20220509084149-6723-m02 "sudo cat /home/docker/cp-test_multinode-20220509084149-6723-m03_multinode-20220509084149-6723-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (12.82s)
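Note: the CopyFile matrix exercises minikube cp in three directions: host to node, node back to the host, and node to node, each verified with ssh plus sudo cat. The general shapes, with PROFILE and NODE as placeholders:

  $ out/minikube-linux-amd64 -p PROFILE cp testdata/cp-test.txt NODE:/home/docker/cp-test.txt
  $ out/minikube-linux-amd64 -p PROFILE ssh -n NODE "sudo cat /home/docker/cp-test.txt"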

                                                
                                    
TestMultiNode/serial/StopNode (2.59s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220509084149-6723 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220509084149-6723 node stop m03: (1.294816627s)
multinode_test.go:214: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220509084149-6723 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220509084149-6723 status: exit status 7 (639.868676ms)
-- stdout --
	multinode-20220509084149-6723
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220509084149-6723-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220509084149-6723-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220509084149-6723 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220509084149-6723 status --alsologtostderr: exit status 7 (655.571033ms)
-- stdout --
	multinode-20220509084149-6723
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220509084149-6723-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220509084149-6723-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0509 08:44:36.363080  106247 out.go:296] Setting OutFile to fd 1 ...
	I0509 08:44:36.363216  106247 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0509 08:44:36.363234  106247 out.go:309] Setting ErrFile to fd 2...
	I0509 08:44:36.363241  106247 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0509 08:44:36.363369  106247 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/bin
	I0509 08:44:36.363584  106247 out.go:303] Setting JSON to false
	I0509 08:44:36.363611  106247 mustload.go:65] Loading cluster: multinode-20220509084149-6723
	I0509 08:44:36.363924  106247 config.go:178] Loaded profile config "multinode-20220509084149-6723": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.0
	I0509 08:44:36.363942  106247 status.go:253] checking status of multinode-20220509084149-6723 ...
	I0509 08:44:36.364369  106247 cli_runner.go:164] Run: docker container inspect multinode-20220509084149-6723 --format={{.State.Status}}
	I0509 08:44:36.398992  106247 status.go:328] multinode-20220509084149-6723 host status = "Running" (err=<nil>)
	I0509 08:44:36.399029  106247 host.go:66] Checking if "multinode-20220509084149-6723" exists ...
	I0509 08:44:36.399293  106247 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220509084149-6723
	I0509 08:44:36.434661  106247 host.go:66] Checking if "multinode-20220509084149-6723" exists ...
	I0509 08:44:36.435408  106247 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0509 08:44:36.435481  106247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220509084149-6723
	I0509 08:44:36.474606  106247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49217 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/multinode-20220509084149-6723/id_rsa Username:docker}
	I0509 08:44:36.565766  106247 ssh_runner.go:195] Run: systemctl --version
	I0509 08:44:36.569978  106247 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0509 08:44:36.580577  106247 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0509 08:44:36.695734  106247 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:60 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-05-09 08:44:36.611358761 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1024-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0509 08:44:36.696581  106247 kubeconfig.go:92] found "multinode-20220509084149-6723" server: "https://192.168.49.2:8443"
	I0509 08:44:36.696638  106247 api_server.go:165] Checking apiserver status ...
	I0509 08:44:36.696679  106247 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0509 08:44:36.706770  106247 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1727/cgroup
	I0509 08:44:36.714851  106247 api_server.go:181] apiserver freezer: "6:freezer:/docker/8f498f6a5062fb2a618e232ffb5bd209e0604e2e34d5decb8fa043becddd8c95/kubepods/burstable/podc37c1e6fb827aa74d7717356390acc47/3337e02841df1df961403ea28ed7f056ffc52c902fd8e5908a1737f331aa9256"
	I0509 08:44:36.714914  106247 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/8f498f6a5062fb2a618e232ffb5bd209e0604e2e34d5decb8fa043becddd8c95/kubepods/burstable/podc37c1e6fb827aa74d7717356390acc47/3337e02841df1df961403ea28ed7f056ffc52c902fd8e5908a1737f331aa9256/freezer.state
	I0509 08:44:36.721901  106247 api_server.go:203] freezer state: "THAWED"
	I0509 08:44:36.721947  106247 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0509 08:44:36.727518  106247 api_server.go:266] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0509 08:44:36.727552  106247 status.go:419] multinode-20220509084149-6723 apiserver status = Running (err=<nil>)
	I0509 08:44:36.727565  106247 status.go:255] multinode-20220509084149-6723 status: &{Name:multinode-20220509084149-6723 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0509 08:44:36.727592  106247 status.go:253] checking status of multinode-20220509084149-6723-m02 ...
	I0509 08:44:36.727875  106247 cli_runner.go:164] Run: docker container inspect multinode-20220509084149-6723-m02 --format={{.State.Status}}
	I0509 08:44:36.762283  106247 status.go:328] multinode-20220509084149-6723-m02 host status = "Running" (err=<nil>)
	I0509 08:44:36.762313  106247 host.go:66] Checking if "multinode-20220509084149-6723-m02" exists ...
	I0509 08:44:36.762609  106247 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220509084149-6723-m02
	I0509 08:44:36.796535  106247 host.go:66] Checking if "multinode-20220509084149-6723-m02" exists ...
	I0509 08:44:36.796876  106247 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0509 08:44:36.796922  106247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220509084149-6723-m02
	I0509 08:44:36.830733  106247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49222 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/machines/multinode-20220509084149-6723-m02/id_rsa Username:docker}
	I0509 08:44:36.913428  106247 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0509 08:44:36.923706  106247 status.go:255] multinode-20220509084149-6723-m02 status: &{Name:multinode-20220509084149-6723-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0509 08:44:36.923753  106247 status.go:253] checking status of multinode-20220509084149-6723-m03 ...
	I0509 08:44:36.923993  106247 cli_runner.go:164] Run: docker container inspect multinode-20220509084149-6723-m03 --format={{.State.Status}}
	I0509 08:44:36.957884  106247 status.go:328] multinode-20220509084149-6723-m03 host status = "Stopped" (err=<nil>)
	I0509 08:44:36.957908  106247 status.go:341] host is not running, skipping remaining checks
	I0509 08:44:36.957913  106247 status.go:255] multinode-20220509084149-6723-m03 status: &{Name:multinode-20220509084149-6723-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
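The stderr trace above also shows how `minikube status` verifies the apiserver: it pgreps the kube-apiserver process, reads that process's freezer cgroup to confirm the state is THAWED (i.e. not paused), and finally probes the /healthz endpoint, expecting HTTP 200 with body "ok". The following is a minimal Go sketch of that final probe only; it is an illustrative approximation, not minikube's actual status code (which lives in api_server.go), and the endpoint address is taken from the kubeconfig line in the log above.

// healthz_probe.go — a sketch of the final health check seen above, not
// minikube's own implementation.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// apiserverHealthy reports whether GET <endpoint>/healthz returns 200 "ok".
func apiserverHealthy(endpoint string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a cert signed by the cluster CA; this sketch
		// does not load that CA, so certificate verification is skipped.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return false, err
	}
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	// Address from the "found ... server" kubeconfig entry in the log above.
	healthy, err := apiserverHealthy("https://192.168.49.2:8443")
	fmt.Println(healthy, err)
}

The two-step design (cgroup freezer first, HTTP probe second) lets the status command distinguish a paused apiserver from a stopped or unhealthy one before attempting any network call.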
--- PASS: TestMultiNode/serial/StopNode (2.59s)

TestMultiNode/serial/StartAfterStop (20.83s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:252: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220509084149-6723 node start m03 --alsologtostderr
multinode_test.go:252: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220509084149-6723 node start m03 --alsologtostderr: (19.945105535s)
multinode_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220509084149-6723 status
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (20.83s)

TestMultiNode/serial/RestartKeepsNodes (105.1s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20220509084149-6723
multinode_test.go:288: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-20220509084149-6723
E0509 08:44:58.159542    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/functional-20220509083328-6723/client.crt: no such file or directory
E0509 08:45:20.434373    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/ingress-addon-legacy-20220509083550-6723/client.crt: no such file or directory
multinode_test.go:288: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-20220509084149-6723: (22.779855966s)
multinode_test.go:293: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220509084149-6723 --wait=true -v=8 --alsologtostderr
E0509 08:45:25.844793    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/functional-20220509083328-6723/client.crt: no such file or directory
E0509 08:46:34.826199    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/addons-20220509082454-6723/client.crt: no such file or directory
multinode_test.go:293: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220509084149-6723 --wait=true -v=8 --alsologtostderr: (1m22.196816128s)
multinode_test.go:298: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20220509084149-6723
--- PASS: TestMultiNode/serial/RestartKeepsNodes (105.10s)

TestMultiNode/serial/DeleteNode (5.35s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220509084149-6723 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220509084149-6723 node delete m03: (4.598874607s)
multinode_test.go:398: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220509084149-6723 status --alsologtostderr
multinode_test.go:412: (dbg) Run:  docker volume ls
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
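The go-template passed to kubectl above walks every node's status.conditions and prints only the Ready condition's status, one line per node. The same template can be exercised locally with Go's text/template package, as in the sketch below; the two-node JSON there is hand-written sample data for illustration, not output captured from this run.

// ready_template.go — the kubectl go-template from the line above, applied
// with text/template to hypothetical sample data.
package main

import (
	"encoding/json"
	"os"
	"text/template"
)

const readyTmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

const nodeList = `{"items":[
  {"status":{"conditions":[{"type":"Ready","status":"True"}]}},
  {"status":{"conditions":[{"type":"Ready","status":"True"}]}}]}`

func main() {
	var nodes map[string]interface{}
	if err := json.Unmarshal([]byte(nodeList), &nodes); err != nil {
		panic(err)
	}
	t := template.Must(template.New("ready").Parse(readyTmpl))
	// Prints " True" once per node, mirroring the kubectl invocation above.
	if err := t.Execute(os.Stdout, nodes); err != nil {
		panic(err)
	}
}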
--- PASS: TestMultiNode/serial/DeleteNode (5.35s)

TestMultiNode/serial/StopMultiNode (21.86s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220509084149-6723 stop
multinode_test.go:312: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220509084149-6723 stop: (21.599148836s)
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220509084149-6723 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220509084149-6723 status: exit status 7 (134.218312ms)

-- stdout --
	multinode-20220509084149-6723
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220509084149-6723-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220509084149-6723 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220509084149-6723 status --alsologtostderr: exit status 7 (128.332269ms)

-- stdout --
	multinode-20220509084149-6723
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220509084149-6723-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0509 08:47:10.037637  121622 out.go:296] Setting OutFile to fd 1 ...
	I0509 08:47:10.037773  121622 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0509 08:47:10.037784  121622 out.go:309] Setting ErrFile to fd 2...
	I0509 08:47:10.037791  121622 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0509 08:47:10.037924  121622 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/bin
	I0509 08:47:10.038103  121622 out.go:303] Setting JSON to false
	I0509 08:47:10.038127  121622 mustload.go:65] Loading cluster: multinode-20220509084149-6723
	I0509 08:47:10.038485  121622 config.go:178] Loaded profile config "multinode-20220509084149-6723": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.0
	I0509 08:47:10.038503  121622 status.go:253] checking status of multinode-20220509084149-6723 ...
	I0509 08:47:10.038946  121622 cli_runner.go:164] Run: docker container inspect multinode-20220509084149-6723 --format={{.State.Status}}
	I0509 08:47:10.072273  121622 status.go:328] multinode-20220509084149-6723 host status = "Stopped" (err=<nil>)
	I0509 08:47:10.072299  121622 status.go:341] host is not running, skipping remaining checks
	I0509 08:47:10.072305  121622 status.go:255] multinode-20220509084149-6723 status: &{Name:multinode-20220509084149-6723 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0509 08:47:10.072328  121622 status.go:253] checking status of multinode-20220509084149-6723-m02 ...
	I0509 08:47:10.072571  121622 cli_runner.go:164] Run: docker container inspect multinode-20220509084149-6723-m02 --format={{.State.Status}}
	I0509 08:47:10.106498  121622 status.go:328] multinode-20220509084149-6723-m02 host status = "Stopped" (err=<nil>)
	I0509 08:47:10.106520  121622 status.go:341] host is not running, skipping remaining checks
	I0509 08:47:10.106526  121622 status.go:255] multinode-20220509084149-6723-m02 status: &{Name:multinode-20220509084149-6723-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.86s)

TestMultiNode/serial/RestartMultiNode (60.61s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:352: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220509084149-6723 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0509 08:47:36.589733    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/ingress-addon-legacy-20220509083550-6723/client.crt: no such file or directory
E0509 08:48:04.274924    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/ingress-addon-legacy-20220509083550-6723/client.crt: no such file or directory
multinode_test.go:352: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220509084149-6723 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (59.847832942s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220509084149-6723 status --alsologtostderr
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (60.61s)

TestMultiNode/serial/ValidateNameConflict (31.55s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20220509084149-6723
multinode_test.go:450: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220509084149-6723-m02 --driver=docker  --container-runtime=docker
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-20220509084149-6723-m02 --driver=docker  --container-runtime=docker: exit status 14 (81.902792ms)

-- stdout --
	* [multinode-20220509084149-6723-m02] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14070
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-20220509084149-6723-m02' is duplicated with machine name 'multinode-20220509084149-6723-m02' in profile 'multinode-20220509084149-6723'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220509084149-6723-m03 --driver=docker  --container-runtime=docker
multinode_test.go:458: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220509084149-6723-m03 --driver=docker  --container-runtime=docker: (28.672446305s)
multinode_test.go:465: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20220509084149-6723
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-20220509084149-6723: exit status 80 (374.248047ms)

-- stdout --
	* Adding node m03 to cluster multinode-20220509084149-6723
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20220509084149-6723-m03 already exists in multinode-20220509084149-6723-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-20220509084149-6723-m03
multinode_test.go:470: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-20220509084149-6723-m03: (2.35494744s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (31.55s)

TestPreload (116.85s)

=== RUN   TestPreload
preload_test.go:48: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20220509084846-6723 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.17.0
E0509 08:49:58.159279    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/functional-20220509083328-6723/client.crt: no such file or directory
preload_test.go:48: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20220509084846-6723 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.17.0: (1m19.396950516s)
preload_test.go:61: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20220509084846-6723 -- docker pull gcr.io/k8s-minikube/busybox
preload_test.go:61: (dbg) Done: out/minikube-linux-amd64 ssh -p test-preload-20220509084846-6723 -- docker pull gcr.io/k8s-minikube/busybox: (1.05816386s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20220509084846-6723 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker --kubernetes-version=v1.17.3
preload_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20220509084846-6723 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker --kubernetes-version=v1.17.3: (33.572412709s)
preload_test.go:80: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20220509084846-6723 -- docker images
helpers_test.go:175: Cleaning up "test-preload-20220509084846-6723" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-20220509084846-6723
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-20220509084846-6723: (2.42856931s)
--- PASS: TestPreload (116.85s)

TestScheduledStopUnix (101.63s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-20220509085043-6723 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-20220509085043-6723 --memory=2048 --driver=docker  --container-runtime=docker: (28.048613327s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220509085043-6723 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20220509085043-6723 -n scheduled-stop-20220509085043-6723
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220509085043-6723 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220509085043-6723 --cancel-scheduled
E0509 08:51:34.825984    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/addons-20220509082454-6723/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20220509085043-6723 -n scheduled-stop-20220509085043-6723
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20220509085043-6723
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220509085043-6723 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20220509085043-6723
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-20220509085043-6723: exit status 7 (93.854619ms)

-- stdout --
	scheduled-stop-20220509085043-6723
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20220509085043-6723 -n scheduled-stop-20220509085043-6723
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20220509085043-6723 -n scheduled-stop-20220509085043-6723: exit status 7 (92.223684ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-20220509085043-6723" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-20220509085043-6723
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-20220509085043-6723: (1.818358537s)
--- PASS: TestScheduledStopUnix (101.63s)

TestSkaffold (57.94s)

=== RUN   TestSkaffold
skaffold_test.go:56: (dbg) Run:  /tmp/skaffold.exe3801201690 version
skaffold_test.go:60: skaffold version: v1.38.0
skaffold_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-20220509085225-6723 --memory=2600 --driver=docker  --container-runtime=docker
E0509 08:52:36.590651    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/ingress-addon-legacy-20220509083550-6723/client.crt: no such file or directory
skaffold_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-20220509085225-6723 --memory=2600 --driver=docker  --container-runtime=docker: (28.270497449s)
skaffold_test.go:83: copying out/minikube-linux-amd64 to /home/jenkins/workspace/Docker_Linux_integration/out/minikube
skaffold_test.go:107: (dbg) Run:  /tmp/skaffold.exe3801201690 run --minikube-profile skaffold-20220509085225-6723 --kube-context skaffold-20220509085225-6723 --status-check=true --port-forward=false --interactive=false
E0509 08:52:57.869910    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/addons-20220509082454-6723/client.crt: no such file or directory
skaffold_test.go:107: (dbg) Done: /tmp/skaffold.exe3801201690 run --minikube-profile skaffold-20220509085225-6723 --kube-context skaffold-20220509085225-6723 --status-check=true --port-forward=false --interactive=false: (16.649767201s)
skaffold_test.go:113: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:342: "leeroy-app-7c5b594bdf-w4twp" [83c7e7a0-5b3d-4833-a0f6-a85391beee15] Running
skaffold_test.go:113: (dbg) TestSkaffold: app=leeroy-app healthy within 5.012068087s
skaffold_test.go:116: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:342: "leeroy-web-bb486b968-szzrb" [47c73375-5b7b-409b-8bbf-3b20ab18c934] Running
skaffold_test.go:116: (dbg) TestSkaffold: app=leeroy-web healthy within 5.006595978s
helpers_test.go:175: Cleaning up "skaffold-20220509085225-6723" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-20220509085225-6723
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-20220509085225-6723: (2.571549588s)
--- PASS: TestSkaffold (57.94s)

TestInsufficientStorage (13.38s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-20220509085323-6723 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-20220509085323-6723 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (10.797704385s)

-- stdout --
	{"specversion":"1.0","id":"54dc0486-7882-408c-8c41-781ca77f69c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-20220509085323-6723] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1846cb8a-98a3-449a-b344-0f4dfcee2880","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=14070"}}
	{"specversion":"1.0","id":"c829869d-5e11-454a-acf5-37d85896988f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"929db232-1340-4fae-955c-350d07c24017","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/kubeconfig"}}
	{"specversion":"1.0","id":"c8d3a783-78c0-4ebb-81fb-b52949dd83d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube"}}
	{"specversion":"1.0","id":"5e580798-8c0a-43ca-afaf-39ceef17347e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"3c756092-314d-4b8c-a518-bac98b9d138a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"c8ef7c68-bf91-455d-8dd0-6063610b6cda","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"4d972be9-3ab9-454e-95df-3547bdbc8bc0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"3c6a6f44-e28b-49e3-bfe6-d02a57a3a808","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"Your cgroup does not allow setting memory."}}
	{"specversion":"1.0","id":"5dadc7b7-0d58-4032-a199-914046b3fd2a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities"}}
	{"specversion":"1.0","id":"cb61e134-c084-4242-a4a0-1dd22b6f58ed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with the root privilege"}}
	{"specversion":"1.0","id":"398a7627-e01b-4ba3-8461-32cd25c3916e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20220509085323-6723 in cluster insufficient-storage-20220509085323-6723","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"8fc2e9ca-bf37-4a33-8bc7-a5bd65aacc34","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"a1969475-a30b-4fbb-93cc-907ed5054691","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"510a9a90-8ea3-4ab6-8910-5031ed8dfb7b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-20220509085323-6723 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20220509085323-6723 --output=json --layout=cluster: exit status 7 (363.502431ms)

-- stdout --
	{"Name":"insufficient-storage-20220509085323-6723","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.25.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220509085323-6723","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0509 08:53:34.289389  155822 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220509085323-6723" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-20220509085323-6723 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20220509085323-6723 --output=json --layout=cluster: exit status 7 (364.271826ms)

-- stdout --
	{"Name":"insufficient-storage-20220509085323-6723","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.25.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220509085323-6723","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0509 08:53:34.654049  155935 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220509085323-6723" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/kubeconfig
	E0509 08:53:34.662907  155935 status.go:557] unable to read event log: stat: stat /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/insufficient-storage-20220509085323-6723/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-20220509085323-6723" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-20220509085323-6723
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-20220509085323-6723: (1.857755018s)
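Each JSON line emitted by `--output=json` in the stdout block above is a CloudEvents-style envelope: specversion, id, source, a type such as io.k8s.sigs.minikube.error, and a data map of strings (in the final event, exitcode 26 and name RSRC_DOCKER_STORAGE identify the out-of-disk failure). Below is a minimal decoding sketch; the struct is assumed for illustration and is not a type exported by minikube.

// event_decode.go — a decoding sketch for the --output=json lines above.
package main

import (
	"encoding/json"
	"fmt"
)

type minikubeEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"` // e.g. io.k8s.sigs.minikube.error
	Data        map[string]string `json:"data"` // name/message/exitcode, all strings
}

func main() {
	// Abbreviated copy of the final error event from the log above.
	line := `{"specversion":"1.0","id":"510a9a90-8ea3-4ab6-8910-5031ed8dfb7b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"26","name":"RSRC_DOCKER_STORAGE"}}`
	var ev minikubeEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	fmt.Println(ev.Type, ev.Data["name"], ev.Data["exitcode"])
}

Because every line is a self-contained event, a consumer can stream the output line by line and react to the first io.k8s.sigs.minikube.error event without buffering the whole run.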
--- PASS: TestInsufficientStorage (13.38s)

TestRunningBinaryUpgrade (87.02s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  /tmp/minikube-v1.9.0.3148248443.exe start -p running-upgrade-20220509085501-6723 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:127: (dbg) Done: /tmp/minikube-v1.9.0.3148248443.exe start -p running-upgrade-20220509085501-6723 --memory=2200 --vm-driver=docker  --container-runtime=docker: (58.693312442s)
version_upgrade_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-20220509085501-6723 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-20220509085501-6723 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (21.366103769s)
helpers_test.go:175: Cleaning up "running-upgrade-20220509085501-6723" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-20220509085501-6723
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-20220509085501-6723: (5.561343223s)
--- PASS: TestRunningBinaryUpgrade (87.02s)

TestMissingContainerUpgrade (119.46s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  /tmp/minikube-v1.9.1.3597337145.exe start -p missing-upgrade-20220509085336-6723 --memory=2200 --driver=docker  --container-runtime=docker
version_upgrade_test.go:316: (dbg) Done: /tmp/minikube-v1.9.1.3597337145.exe start -p missing-upgrade-20220509085336-6723 --memory=2200 --driver=docker  --container-runtime=docker: (1m1.622145188s)
version_upgrade_test.go:325: (dbg) Run:  docker stop missing-upgrade-20220509085336-6723
version_upgrade_test.go:325: (dbg) Done: docker stop missing-upgrade-20220509085336-6723: (10.46554292s)
version_upgrade_test.go:330: (dbg) Run:  docker rm missing-upgrade-20220509085336-6723
version_upgrade_test.go:336: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-20220509085336-6723 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:336: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-20220509085336-6723 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (42.636588807s)
helpers_test.go:175: Cleaning up "missing-upgrade-20220509085336-6723" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-20220509085336-6723
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-20220509085336-6723: (4.234268673s)
--- PASS: TestMissingContainerUpgrade (119.46s)

TestStoppedBinaryUpgrade/Setup (0.5s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.50s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220509085336-6723 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-20220509085336-6723 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (92.837586ms)

-- stdout --
	* [NoKubernetes-20220509085336-6723] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14070
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

TestNoKubernetes/serial/StartWithK8s (49.18s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220509085336-6723 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220509085336-6723 --driver=docker  --container-runtime=docker: (48.664317236s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-20220509085336-6723 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (49.18s)

TestStoppedBinaryUpgrade/Upgrade (84.36s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  /tmp/minikube-v1.9.0.2601928495.exe start -p stopped-upgrade-20220509085336-6723 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:190: (dbg) Done: /tmp/minikube-v1.9.0.2601928495.exe start -p stopped-upgrade-20220509085336-6723 --memory=2200 --vm-driver=docker  --container-runtime=docker: (48.655177269s)
version_upgrade_test.go:199: (dbg) Run:  /tmp/minikube-v1.9.0.2601928495.exe -p stopped-upgrade-20220509085336-6723 stop
version_upgrade_test.go:199: (dbg) Done: /tmp/minikube-v1.9.0.2601928495.exe -p stopped-upgrade-20220509085336-6723 stop: (12.436795756s)
version_upgrade_test.go:205: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-20220509085336-6723 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:205: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-20220509085336-6723 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (23.26380865s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (84.36s)

TestNoKubernetes/serial/StartWithStopK8s (15.95s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220509085336-6723 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220509085336-6723 --no-kubernetes --driver=docker  --container-runtime=docker: (13.131255095s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-20220509085336-6723 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-20220509085336-6723 status -o json: exit status 2 (495.444182ms)

-- stdout --
	{"Name":"NoKubernetes-20220509085336-6723","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-20220509085336-6723
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-20220509085336-6723: (2.323235199s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (15.95s)

TestNoKubernetes/serial/Start (7.62s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220509085336-6723 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220509085336-6723 --no-kubernetes --driver=docker  --container-runtime=docker: (7.622926336s)
--- PASS: TestNoKubernetes/serial/Start (7.62s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.49s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-20220509085336-6723 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-20220509085336-6723 "sudo systemctl is-active --quiet service kubelet": exit status 1 (493.317781ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.49s)

TestNoKubernetes/serial/ProfileList (1.28s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.28s)

TestNoKubernetes/serial/Stop (1.32s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-20220509085336-6723
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-20220509085336-6723: (1.316815748s)
--- PASS: TestNoKubernetes/serial/Stop (1.32s)

TestNoKubernetes/serial/StartNoArgs (6.2s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220509085336-6723 --driver=docker  --container-runtime=docker
E0509 08:54:58.159687    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/functional-20220509083328-6723/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220509085336-6723 --driver=docker  --container-runtime=docker: (6.197223555s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.20s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.42s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-20220509085336-6723 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-20220509085336-6723 "sudo systemctl is-active --quiet service kubelet": exit status 1 (420.209435ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.42s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.59s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-20220509085336-6723
version_upgrade_test.go:213: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-20220509085336-6723: (1.59340798s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.59s)

TestPause/serial/Start (52.15s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20220509085619-6723 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
E0509 08:56:21.206578    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/functional-20220509083328-6723/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-20220509085619-6723 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (52.152052748s)
--- PASS: TestPause/serial/Start (52.15s)

TestStartStop/group/old-k8s-version/serial/FirstStart (122.36s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20220509085700-6723 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0
start_stop_delete_test.go:188: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20220509085700-6723 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0: (2m2.357777441s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (122.36s)

TestPause/serial/SecondStartNoReconfiguration (5.86s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20220509085619-6723 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-20220509085619-6723 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (5.850596483s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (5.86s)

TestPause/serial/Pause (0.74s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20220509085619-6723 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.74s)

TestPause/serial/VerifyStatus (0.52s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-20220509085619-6723 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-20220509085619-6723 --output=json --layout=cluster: exit status 2 (520.412843ms)

-- stdout --
	{"Name":"pause-20220509085619-6723","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 14 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.25.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20220509085619-6723","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
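The cluster-layout status JSON above reuses HTTP status codes throughout: 200 OK, 405 Stopped, 418 Paused, and (in the earlier storage test) 507 InsufficientStorage. Note that the command exits non-zero (2 here) whenever the cluster is not fully running, so a caller must still read stdout on error. A parsing sketch follows; the Go types are assumed for illustration from the fields visible above and are not minikube's own.

// cluster_status.go — parsing sketch for `status --output=json --layout=cluster`.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type component struct {
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type node struct {
	Name       string               `json:"Name"`
	StatusCode int                  `json:"StatusCode"`
	Components map[string]component `json:"Components"`
}

type clusterStatus struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
	Nodes      []node `json:"Nodes"`
}

func main() {
	// status exits non-zero for a paused cluster, so keep stdout even when
	// err is an *exec.ExitError.
	out, _ := exec.Command("minikube", "status", "-p", "pause-20220509085619-6723",
		"--output=json", "--layout=cluster").Output()
	var st clusterStatus
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Println("decode:", err)
		return
	}
	fmt.Println(st.Name, st.StatusName) // e.g. pause-20220509085619-6723 Paused
}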
--- PASS: TestPause/serial/VerifyStatus (0.52s)

TestPause/serial/Unpause (0.75s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-20220509085619-6723 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.75s)

TestPause/serial/PauseAgain (0.93s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20220509085619-6723 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.93s)

                                                
                                    
TestPause/serial/DeletePaused (2.85s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-20220509085619-6723 --alsologtostderr -v=5

                                                
                                                
=== CONT  TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-20220509085619-6723 --alsologtostderr -v=5: (2.852210803s)
--- PASS: TestPause/serial/DeletePaused (2.85s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (3.13s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json

                                                
                                                
=== CONT  TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.004141453s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-20220509085619-6723

                                                
                                                
=== CONT  TestPause/serial/VerifyDeletedResources
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-20220509085619-6723: exit status 1 (39.636768ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such volume: pause-20220509085619-6723

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (3.13s)
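
Note: the non-zero exit from docker volume inspect is the success signal here; once the profile is deleted, the volume lookup must fail. A minimal sketch of the same assertion in a script (profile name taken from this run):

    # Assert the profile's Docker volume is gone after `minikube delete`.
    if docker volume inspect pause-20220509085619-6723 >/dev/null 2>&1; then
        echo "volume still exists" >&2
        exit 1
    fi
    echo "volume removed as expected"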

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (0.06s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (0.08s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (0.08s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/FirstStart (0.09s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestStartStop/group/default-k8s-different-port/serial/FirstStart (0.09s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (0.07s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:286: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (55.6s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-20220509085554-6723 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=docker
E0509 08:57:36.590169    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/ingress-addon-legacy-20220509083550-6723/client.crt: no such file or directory
E0509 08:58:10.538801    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/skaffold-20220509085225-6723/client.crt: no such file or directory
E0509 08:58:10.544118    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/skaffold-20220509085225-6723/client.crt: no such file or directory
E0509 08:58:10.554416    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/skaffold-20220509085225-6723/client.crt: no such file or directory
E0509 08:58:10.575118    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/skaffold-20220509085225-6723/client.crt: no such file or directory
E0509 08:58:10.615493    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/skaffold-20220509085225-6723/client.crt: no such file or directory
E0509 08:58:10.695840    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/skaffold-20220509085225-6723/client.crt: no such file or directory
E0509 08:58:10.856288    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/skaffold-20220509085225-6723/client.crt: no such file or directory
E0509 08:58:11.177167    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/skaffold-20220509085225-6723/client.crt: no such file or directory
E0509 08:58:11.818231    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/skaffold-20220509085225-6723/client.crt: no such file or directory
E0509 08:58:13.099265    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/skaffold-20220509085225-6723/client.crt: no such file or directory
E0509 08:58:15.659746    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/skaffold-20220509085225-6723/client.crt: no such file or directory
E0509 08:58:20.780016    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/skaffold-20220509085225-6723/client.crt: no such file or directory
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-20220509085554-6723 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=docker: (55.598829814s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (55.60s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:342: "kindnet-2bwnx" [d3f2192d-f081-447a-aaa4-c36469ed4dd6] Running
E0509 08:58:31.020500    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/skaffold-20220509085225-6723/client.crt: no such file or directory
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.012429625s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-20220509085554-6723 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.38s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.24s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kindnet-20220509085554-6723 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-wm4mc" [b3c63e83-eda6-4873-b728-d03871d42ec7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-869c55b6dc-wm4mc" [b3c63e83-eda6-4873-b728-d03871d42ec7] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.052761136s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.24s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kindnet-20220509085554-6723 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)
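
Note: this DNS subtest is a reusable one-liner when debugging a CNI setup: it resolves the kubernetes.default service from inside the netcat deployment, so a non-zero exit means in-cluster DNS is broken:

    kubectl --context kindnet-20220509085554-6723 exec deployment/netcat -- nslookup kubernetes.default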

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:188: (dbg) Run:  kubectl --context kindnet-20220509085554-6723 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kindnet-20220509085554-6723 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)
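
Note: Localhost and HairPin differ only in the dial target. Localhost connects to the pod's own port directly, while HairPin connects back through the pod's own netcat Service, which succeeds only when hairpin NAT is working. The two probes as run above:

    # Direct loopback: pod -> localhost:8080.
    kubectl --context kindnet-20220509085554-6723 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # Hairpin: pod -> its own Service name -> back to itself.
    kubectl --context kindnet-20220509085554-6723 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"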

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (46.06s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-20220509085553-6723 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=docker
E0509 08:58:51.500779    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/skaffold-20220509085225-6723/client.crt: no such file or directory
E0509 08:58:59.635231    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/ingress-addon-legacy-20220509083550-6723/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-20220509085553-6723 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=docker: (46.064686165s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (46.06s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (8.51s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context old-k8s-version-20220509085700-6723 create -f testdata/busybox.yaml
start_stop_delete_test.go:198: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [9dff0267-67de-410e-a81d-8f6ce7e5b2e9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [9dff0267-67de-410e-a81d-8f6ce7e5b2e9] Running
start_stop_delete_test.go:198: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.013823175s
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context old-k8s-version-20220509085700-6723 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.51s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.71s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-20220509085700-6723 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context old-k8s-version-20220509085700-6723 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.71s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (10.93s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-20220509085700-6723 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:230: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-20220509085700-6723 --alsologtostderr -v=3: (10.926909122s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (10.93s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220509085700-6723 -n old-k8s-version-20220509085700-6723
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220509085700-6723 -n old-k8s-version-20220509085700-6723: exit status 7 (107.128781ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-20220509085700-6723 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)
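
Note: "status error: exit status 7 (may be ok)" reflects that minikube status reports a stopped profile through its exit code rather than failing outright; the test only needs the Host value, so a non-zero exit on a stopped cluster is tolerated. A sketch of the same tolerance in a script (same profile as above):

    # `status` exits non-zero for a stopped profile; capture instead of aborting.
    rc=0
    HOST=$(out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220509085700-6723 -n old-k8s-version-20220509085700-6723) || rc=$?
    echo "host=$HOST exit=$rc"   # e.g. "host=Stopped exit=7"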

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (605.49s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20220509085700-6723 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20220509085700-6723 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0: (10m5.017503919s)
start_stop_delete_test.go:264: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220509085700-6723 -n old-k8s-version-20220509085700-6723
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (605.49s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (48.89s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-20220509085553-6723 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=docker
E0509 08:59:32.461204    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/skaffold-20220509085225-6723/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p bridge-20220509085553-6723 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=docker: (48.889520404s)
--- PASS: TestNetworkPlugins/group/bridge/Start (48.89s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.79s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-20220509085553-6723 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.79s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.76s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context enable-default-cni-20220509085553-6723 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context enable-default-cni-20220509085553-6723 replace --force -f testdata/netcat-deployment.yaml: (1.489621505s)
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-r2frr" [2accdd30-da9e-41eb-a421-95a960c3f274] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-869c55b6dc-r2frr" [2accdd30-da9e-41eb-a421-95a960c3f274] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.010103671s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.76s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220509085553-6723 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:188: (dbg) Run:  kubectl --context enable-default-cni-20220509085553-6723 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:238: (dbg) Run:  kubectl --context enable-default-cni-20220509085553-6723 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (44.78s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-20220509085553-6723 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker  --container-runtime=docker
E0509 08:59:58.159911    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/functional-20220509083328-6723/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-20220509085553-6723 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (44.777599997s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (44.78s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-20220509085553-6723 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.38s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.38s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context bridge-20220509085553-6723 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-jtp8l" [279a9adf-0596-4b4a-894e-cb1f5d82673d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-869c55b6dc-jtp8l" [279a9adf-0596-4b4a-894e-cb1f5d82673d] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.007149392s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.38s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220509085553-6723 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:188: (dbg) Run:  kubectl --context bridge-20220509085553-6723 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:238: (dbg) Run:  kubectl --context bridge-20220509085553-6723 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/cilium/Start (93.09s)

=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p cilium-20220509085554-6723 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p cilium-20220509085554-6723 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=docker: (1m33.088498233s)
--- PASS: TestNetworkPlugins/group/cilium/Start (93.09s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-20220509085553-6723 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.38s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (13.24s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kubenet-20220509085553-6723 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-xgdnv" [c9631baa-9b88-4a2c-8dea-f80375fe7b29] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-869c55b6dc-xgdnv" [c9631baa-9b88-4a2c-8dea-f80375fe7b29] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 13.038470424s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (13.24s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220509085553-6723 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:188: (dbg) Run:  kubectl --context kubenet-20220509085553-6723 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/false/Start (297.24s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p false-20220509085554-6723 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p false-20220509085554-6723 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker  --container-runtime=docker: (4m57.241229751s)
--- PASS: TestNetworkPlugins/group/false/Start (297.24s)

                                                
                                    
TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:342: "cilium-db8fx" [65bfcaf3-d287-4f71-95b3-c6df56993a48] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.014779276s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/cilium/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p cilium-20220509085554-6723 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.43s)

                                                
                                    
TestNetworkPlugins/group/cilium/NetCatPod (12s)

=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context cilium-20220509085554-6723 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-lsnl9" [bd8831c6-e834-4168-ae4b-aecd40f7967d] Pending
helpers_test.go:342: "netcat-869c55b6dc-lsnl9" [bd8831c6-e834-4168-ae4b-aecd40f7967d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-869c55b6dc-lsnl9" [bd8831c6-e834-4168-ae4b-aecd40f7967d] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 11.008124716s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (12.00s)

                                                
                                    
TestNetworkPlugins/group/cilium/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:169: (dbg) Run:  kubectl --context cilium-20220509085554-6723 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/cilium/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:188: (dbg) Run:  kubectl --context cilium-20220509085554-6723 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/cilium/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:238: (dbg) Run:  kubectl --context cilium-20220509085554-6723 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-weave/Start (53.96s)

=== RUN   TestNetworkPlugins/group/custom-weave/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p custom-weave-20220509085554-6723 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker  --container-runtime=docker
E0509 09:02:36.589883    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/ingress-addon-legacy-20220509083550-6723/client.crt: no such file or directory
E0509 09:03:10.539188    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/skaffold-20220509085225-6723/client.crt: no such file or directory
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p custom-weave-20220509085554-6723 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker  --container-runtime=docker: (53.957455372s)
--- PASS: TestNetworkPlugins/group/custom-weave/Start (53.96s)
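
Note: besides built-in names (kindnet, bridge, cilium, ...), --cni also accepts a path to a custom CNI manifest, which is what this subtest exercises with testdata/weavenet.yaml. In general form (profile name and path are illustrative):

    out/minikube-linux-amd64 start -p custom-cni-demo --cni=/path/to/custom-cni.yaml --driver=docker --container-runtime=docker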

                                                
                                    
TestNetworkPlugins/group/custom-weave/KubeletFlags (0.44s)

=== RUN   TestNetworkPlugins/group/custom-weave/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-weave-20220509085554-6723 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-weave/KubeletFlags (0.44s)

                                                
                                    
TestNetworkPlugins/group/custom-weave/NetCatPod (9.29s)

=== RUN   TestNetworkPlugins/group/custom-weave/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context custom-weave-20220509085554-6723 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-sks7n" [a5395a4f-2691-4efb-a44d-1960b703c99c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-869c55b6dc-sks7n" [a5395a4f-2691-4efb-a44d-1960b703c99c] Running
E0509 09:03:28.541269    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/kindnet-20220509085554-6723/client.crt: no such file or directory
E0509 09:03:28.546576    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/kindnet-20220509085554-6723/client.crt: no such file or directory
E0509 09:03:28.556912    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/kindnet-20220509085554-6723/client.crt: no such file or directory
E0509 09:03:28.577222    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/kindnet-20220509085554-6723/client.crt: no such file or directory
E0509 09:03:28.617496    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/kindnet-20220509085554-6723/client.crt: no such file or directory
E0509 09:03:28.697830    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/kindnet-20220509085554-6723/client.crt: no such file or directory
E0509 09:03:28.858217    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/kindnet-20220509085554-6723/client.crt: no such file or directory
E0509 09:03:29.179313    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/kindnet-20220509085554-6723/client.crt: no such file or directory
E0509 09:03:29.820232    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/kindnet-20220509085554-6723/client.crt: no such file or directory
E0509 09:03:31.100503    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/kindnet-20220509085554-6723/client.crt: no such file or directory
net_test.go:152: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: app=netcat healthy within 9.011601845s
--- PASS: TestNetworkPlugins/group/custom-weave/NetCatPod (9.29s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.42s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-20220509085554-6723 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.42s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (11.44s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context false-20220509085554-6723 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-blljk" [0d0b62b7-9a38-4dcf-8ae9-f22f72cd2053] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0509 09:06:12.384310    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/kindnet-20220509085554-6723/client.crt: no such file or directory
helpers_test.go:342: "netcat-869c55b6dc-blljk" [0d0b62b7-9a38-4dcf-8ae9-f22f72cd2053] Running
E0509 09:06:17.063229    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/kubenet-20220509085553-6723/client.crt: no such file or directory
net_test.go:152: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.007117933s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.44s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-6fb5469cf5-62kxs" [89e4187b-cc19-49b5-9931-142ca1975666] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012975032s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:289: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-6fb5469cf5-62kxs" [89e4187b-cc19-49b5-9931-142ca1975666] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0509 09:09:36.532194    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/enable-default-cni-20220509085553-6723/client.crt: no such file or directory
E0509 09:09:37.870563    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/addons-20220509082454-6723/client.crt: no such file or directory
start_stop_delete_test.go:289: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006484106s
start_stop_delete_test.go:293: (dbg) Run:  kubectl --context old-k8s-version-20220509085700-6723 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.44s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-20220509085700-6723 "sudo crictl images -o json"
start_stop_delete_test.go:306: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.44s)
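
Note: VerifyKubernetesImages lists the images held by the container runtime inside the node and flags anything outside the expected Kubernetes set (here the busybox test image). The same listing, filtered down to image names with jq (the jq filter and the crictl JSON field names are assumptions, not part of the test):

    out/minikube-linux-amd64 ssh -p old-k8s-version-20220509085700-6723 "sudo crictl images -o json" | jq -r '.images[].repoTags[]'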

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.61s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-20220509085700-6723 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220509085700-6723 -n old-k8s-version-20220509085700-6723
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220509085700-6723 -n old-k8s-version-20220509085700-6723: exit status 2 (437.519999ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20220509085700-6723 -n old-k8s-version-20220509085700-6723
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20220509085700-6723 -n old-k8s-version-20220509085700-6723: exit status 2 (474.085447ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-20220509085700-6723 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220509085700-6723 -n old-k8s-version-20220509085700-6723
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20220509085700-6723 -n old-k8s-version-20220509085700-6723
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.61s)
E0509 09:09:50.599748    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/cilium-20220509085554-6723/client.crt: no such file or directory
E0509 09:09:58.159530    6723 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14070-3366-00cd5342a55ca888d8306eb2334aa46bcc205630/.minikube/profiles/functional-20220509083328-6723/client.crt: no such file or directory
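
Note: the Pause subtest above boils down to this sequence: pause, confirm the apiserver reports Paused and the kubelet Stopped (both via exit status 2, which the test accepts), then unpause and confirm both recover. Condensed into shell, with the same exit-code handling the test uses:

    P=old-k8s-version-20220509085700-6723
    out/minikube-linux-amd64 pause -p "$P" --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format={{.APIServer}} -p "$P" -n "$P" || true   # Paused; exit status 2
    out/minikube-linux-amd64 status --format={{.Kubelet}} -p "$P" -n "$P" || true     # Stopped; exit status 2
    out/minikube-linux-amd64 unpause -p "$P" --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format={{.APIServer}} -p "$P" -n "$P"           # back to exit 0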

                                                
                                    

Test skip (20/288)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.24.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.24.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.24.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.24.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.24.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.24.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.24.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.24.0/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.24.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.24.1-rc.0/preload-exists (0.04s)

=== RUN   TestDownloadOnly/v1.24.1-rc.0/preload-exists
aaa_download_only_test.go:111: No preload image
--- SKIP: TestDownloadOnly/v1.24.1-rc.0/preload-exists (0.04s)

                                                
                                    
TestDownloadOnly/v1.24.1-rc.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.24.1-rc.0/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.24.1-rc.0/kubectl (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:448: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:545: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)
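All three tunnel DNS subtests share one gate combining OS and VM driver. A sketch, with the driver string passed in as an assumed parameter:

    // Sketch of a combined OS/driver gate matching the skip reason above;
    // how the driver name is obtained is assumed, not minikube's API.
    package example

    import (
        "runtime"
        "testing"
    )

    func skipUnlessHyperkitOnDarwin(t *testing.T, driver string) {
        if runtime.GOOS != "darwin" || driver != "hyperkit" {
            t.Skip("DNS forwarding is only supported for Hyperkit on Darwin")
        }
    }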

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)
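Unlike the OS gates, the gvisor test is gated behind a flag on the test binary itself. A sketch of that wiring (the flag registration shown here is illustrative, not minikube's actual plumbing):

    // Sketch of a flag-gated test, mirroring the --gvisor skip above.
    package example

    import (
        "flag"
        "testing"
    )

    var gvisor = flag.Bool("gvisor", false, "run the gvisor addon test")

    func TestGvisorAddon(t *testing.T) {
        if !*gvisor {
            t.Skip("skipping test because --gvisor=false")
        }
        // ...exercise the gvisor addon here...
    }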

                                                
                                    
TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.46s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:105: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-20220509085727-6723" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-20220509085727-6723
--- SKIP: TestStartStop/group/disable-driver-mounts (0.46s)
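Even when a profile-based group is skipped, the harness still deletes the profile it created by shelling out to the built binary, as the helpers_test.go lines above show. A sketch of such a cleanup helper, with hypothetical names and an assumed one-minute timeout:

    // Sketch of a profile-cleanup helper that invokes the built minikube
    // binary; names and timeout are assumptions for illustration.
    package example

    import (
        "context"
        "os/exec"
        "testing"
        "time"
    )

    func cleanupProfile(t *testing.T, binary, profile string) {
        ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
        defer cancel()
        if out, err := exec.CommandContext(ctx, binary, "delete", "-p", profile).CombinedOutput(); err != nil {
            t.Logf("failed to delete profile %s: %v\n%s", profile, err, out)
        }
    }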

                                                
                                    
TestNetworkPlugins/group/flannel (0.37s)
=== RUN   TestNetworkPlugins/group/flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "flannel-20220509085553-6723" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p flannel-20220509085553-6723
--- SKIP: TestNetworkPlugins/group/flannel (0.37s)

                                                
                                    