Test Report: Docker_Linux 10730

Commit: 3063e9e720f8ac1d763b520e496d37888b9d0281

Failed tests (2 of 241)

Order  Failed test                   Duration (s)
32     TestAddons/parallel/GCPAuth   16.4
40     TestErrorSpam                 74.48
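The two failures can be re-run in isolation before digging into the logs. A minimal sketch, assuming a local checkout of the minikube repository with its integration tests under `test/integration` and a built `out/minikube-linux-amd64` (the `rerun_failures` helper is illustrative, not part of the suite):

```shell
# Regexp over test names, as accepted by go test -run (hypothetical selection).
FAILING='TestAddons/parallel/GCPAuth|TestErrorSpam'

rerun_failures() {
  # -run filters by name; the generous -timeout covers cluster startup.
  go test ./test/integration -run "$FAILING" -timeout 30m -v
}

# Invoke only when the test sources are actually present:
if [ -d ./test/integration ]; then
  rerun_failures
fi
```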
TestAddons/parallel/GCPAuth (16.4s)

=== RUN   TestAddons/parallel/GCPAuth
=== PAUSE TestAddons/parallel/GCPAuth

=== CONT  TestAddons/parallel/GCPAuth

=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:570: (dbg) Run:  kubectl --context addons-20210310004204-1084876 create -f testdata/busybox.yaml

=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:576: (dbg) TestAddons/parallel/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:335: "busybox" [066317cf-b496-4329-b73f-29969f50f4d9] Pending
helpers_test.go:335: "busybox" [066317cf-b496-4329-b73f-29969f50f4d9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:335: "busybox" [066317cf-b496-4329-b73f-29969f50f4d9] Running

=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:576: (dbg) TestAddons/parallel/GCPAuth: integration-test=busybox healthy within 9.045320876s
addons_test.go:582: (dbg) Run:  kubectl --context addons-20210310004204-1084876 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"

=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:582: (dbg) Non-zero exit: kubectl --context addons-20210310004204-1084876 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS": exit status 1 (211.270025ms)

** stderr ** 
	command terminated with exit code 1

** /stderr **
addons_test.go:584: printenv creds: exit status 1
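The failing assertion is just a `printenv` inside the pod: the gcp-auth addon's mutating webhook was expected to inject `GOOGLE_APPLICATION_CREDENTIALS` into the busybox container, and exit status 1 means the variable is unset. A hedged sketch of reproducing the check by hand (assumes the same profile name and `kubectl` on PATH; `check_gcp_env` is an illustrative helper):

```shell
PROFILE=addons-20210310004204-1084876

check_gcp_env() {
  # Exit status 1 from printenv means the variable was never injected.
  kubectl --context "$PROFILE" exec busybox -- \
    /bin/sh -c 'printenv GOOGLE_APPLICATION_CREDENTIALS'
}

# Run only against a reachable cluster:
if kubectl --context "$PROFILE" version >/dev/null 2>&1; then
  check_gcp_env || echo "GOOGLE_APPLICATION_CREDENTIALS not injected"
fi
```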
helpers_test.go:218: -----------------------post-mortem--------------------------------
helpers_test.go:226: ======>  post-mortem[TestAddons/parallel/GCPAuth]: docker inspect <======
helpers_test.go:227: (dbg) Run:  docker inspect addons-20210310004204-1084876
helpers_test.go:231: (dbg) docker inspect addons-20210310004204-1084876:

-- stdout --
	[
	    {
	        "Id": "8ffeb8905798f959bf99c6fd30329a57c30282331a37dedda1018ba92dbf498a",
	        "Created": "2021-03-10T00:42:07.596078237Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1086503,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-03-10T00:42:08.135369725Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a776c544501ab7f8d55c0f9d8df39bc284df5e744ef1ab4fa59bbd753c98d5f6",
	        "ResolvConfPath": "/var/lib/docker/containers/8ffeb8905798f959bf99c6fd30329a57c30282331a37dedda1018ba92dbf498a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8ffeb8905798f959bf99c6fd30329a57c30282331a37dedda1018ba92dbf498a/hostname",
	        "HostsPath": "/var/lib/docker/containers/8ffeb8905798f959bf99c6fd30329a57c30282331a37dedda1018ba92dbf498a/hosts",
	        "LogPath": "/var/lib/docker/containers/8ffeb8905798f959bf99c6fd30329a57c30282331a37dedda1018ba92dbf498a/8ffeb8905798f959bf99c6fd30329a57c30282331a37dedda1018ba92dbf498a-json.log",
	        "Name": "/addons-20210310004204-1084876",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-20210310004204-1084876:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-20210310004204-1084876",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/74dca3344766ad51cfa59d45b33f265a63d0bc70d1bb15579e722aa4f897156e-init/diff:/var/lib/docker/overlay2/28b47fe487a6db3251353ede3b6f69e6964a6f2abeebaa30c0ad1d1e78d6d00a/diff:/var/lib/docker/overlay2/29f807c33e13b428dfb88e0079cb48053d52bc476ea5072dee137978cb12d04a/diff:/var/lib/docker/overlay2/dac80af91649d325e8284c69485d6ff878d5853f575704daf3e34a558a4dda4f/diff:/var/lib/docker/overlay2/df0ce8e6141fb84ed6e57b1c2688a69b8eb9a17fd5ee143949e1fdaed4e2127b/diff:/var/lib/docker/overlay2/aeddbcd65ba884bcaec5c4d9ed1d4d7786ab18c2b63df71ad64387fe05e81f1d/diff:/var/lib/docker/overlay2/7b1d7e6c08ca72dcd115aafabcf39fc5bf8c7ebfef24cea5afd72de3e3aaef74/diff:/var/lib/docker/overlay2/e172241d5c67cd99e30286314f9a7e0bfdbe98e533ace6c30b573c8e7016a37c/diff:/var/lib/docker/overlay2/b92bddb174c1c73ced52b390e387b906f0333b7864874fd7b14b4a81995084e2/diff:/var/lib/docker/overlay2/592238ad80762d7c7fad92dccc0dca54b900e705d75280eab248a5cd75f9e0c9/diff:/var/lib/docker/overlay2/3703a1
9c7e2d92b4b1aa0e6ed88a22e60ae5e4734d51a6ee4120ffb3fd44cedd/diff:/var/lib/docker/overlay2/026c3575d0e91a7ca6ffeac4648df1b4810fd709c6e2cca8baaa56f1240d373a/diff:/var/lib/docker/overlay2/26f9dc404e831d46f04fc64d90165fcb6cf2b626f20d5c6f3c4d192330974443/diff:/var/lib/docker/overlay2/1d4aa7eb8e0fd341ce63a7e0ca03271806a93d7b3ff5f68421a54114f7db7920/diff:/var/lib/docker/overlay2/262ecf385929e321ea03edb42a15ed2009ddac8fe3e6370e83fbb48c9cf2a5a8/diff:/var/lib/docker/overlay2/437e5fda1fb7c52e890750e7d99942571a65211a4d0aeca3e47a312c037ce50c/diff:/var/lib/docker/overlay2/c49137c10ad9355ca71ee15d51fa243c0c5677d7cfc5be7e91e3b6a41f147a44/diff:/var/lib/docker/overlay2/2df3c6c6f614eb15d222c1928d20367e93571cdcc98fce5703c321bbc9e89ada/diff:/var/lib/docker/overlay2/4223138719a89216f8b18bd8209459f6d9da0eef8e14f421b9ac14497e6303fe/diff:/var/lib/docker/overlay2/8c322e276775bec279ce519ab64fdc5d72374dd59f193b4e1f1c64b169dbe95c/diff:/var/lib/docker/overlay2/8835de952c31ba4fb601f762e9fe01ff4f63c9a70cd4cbb66aa33f53f0b6ec65/diff:/var/lib/d
ocker/overlay2/e4d38c30d6aa80c930dc3bfc34876ed425d6f4e5cfa9a2bcb9c79003aaea69ce/diff:/var/lib/docker/overlay2/8b23f70785fc8c9ad799398a641dacde1831c1e8b8902353d8de6fe2df541e91/diff:/var/lib/docker/overlay2/b85b76d6ce7303e7b59902f25ce2b403c9ae01301bbdb51f3c9987b54aa8fab2/diff:/var/lib/docker/overlay2/70e3155bfae885c5e656de33a3952490499dc2d41b3f86d8220b493291996885/diff:/var/lib/docker/overlay2/9b5cbb5d27c2d34162d8b38e5d6585f627dec3775c3017d6ad087f013c951f9d/diff:/var/lib/docker/overlay2/2110e17f930b05e90bca2794e63a1f7910bf640c30fd026509744e74ef97d506/diff:/var/lib/docker/overlay2/790948bf453d8ae59a2cb4892b21787a30df4f980a6e4bf63c5db18db81815ba/diff:/var/lib/docker/overlay2/452aaf1cb28ef124364e99e3ace726f6719100c165f921ee8507f82eb3652e32/diff:/var/lib/docker/overlay2/8502aab369c9748ff5d36f81e9a59c609d5f07f621ee7e01d1f9bd9714381ec6/diff:/var/lib/docker/overlay2/c40c9698a31968efd949a25e8ac993fc7e6185124270c526860ffdbe13a7c356/diff:/var/lib/docker/overlay2/fd2db339b2f787338c29e87998a27397b2d1b6616f3ca8deeacef9be144
d6616/diff:/var/lib/docker/overlay2/57ab026e96dbcabc281c2c582254053bae73a4c69d2eca845047871bd5406288/diff:/var/lib/docker/overlay2/ff280a1ef7fc06c9015daf55cc9e56d3d0818daf0bd8f3d767415cb4681d40cd/diff:/var/lib/docker/overlay2/0409d6000dc4a61c33927b65be0ef24aab292a9b8c7f1156dd59952031ec958a/diff:/var/lib/docker/overlay2/2bb7adee4012b2bbda639d5e5169236c33e1f38bf28e8475d73a21340e2073c4/diff:/var/lib/docker/overlay2/26aa983703a7aa2bc7b698b7fc9efd858ecd26ec7dd93e9d89a75272e577fa9f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/74dca3344766ad51cfa59d45b33f265a63d0bc70d1bb15579e722aa4f897156e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/74dca3344766ad51cfa59d45b33f265a63d0bc70d1bb15579e722aa4f897156e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/74dca3344766ad51cfa59d45b33f265a63d0bc70d1bb15579e722aa4f897156e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-20210310004204-1084876",
	                "Source": "/var/lib/docker/volumes/addons-20210310004204-1084876/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-20210310004204-1084876",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-20210310004204-1084876",
	                "name.minikube.sigs.k8s.io": "addons-20210310004204-1084876",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "79ccb920cd50171693cb27b25c2d9890a7bab316b93694350a9e6e873bfaae49",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33482"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33481"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33478"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33480"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33479"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/79ccb920cd50",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-20210310004204-1084876": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.205"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "8ffeb8905798"
	                    ],
	                    "NetworkID": "97b116c7701132f33343980df4e57ac63fbae241cbbb226a5b02524ead34b4a1",
	                    "EndpointID": "72eaadfe41d0f3a31fb4b3653975618fb0036ca24555a59da109fde9b10db371",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.205",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:cd",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
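Most of the inspect dump above is noise for triage; a Go-template query can pull just the health-relevant fields in one line. A sketch, assuming the same container name and a reachable docker daemon:

```shell
NAME=addons-20210310004204-1084876

# Container state, restart count, and node IP instead of the full JSON dump.
FMT='{{.State.Status}} restarts={{.RestartCount}} ip={{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'

# Query only when the container actually exists on this host:
if docker inspect "$NAME" >/dev/null 2>&1; then
  docker inspect --format "$FMT" "$NAME"
fi
```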
helpers_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-20210310004204-1084876 -n addons-20210310004204-1084876

=== CONT  TestAddons/parallel/GCPAuth
helpers_test.go:240: <<< TestAddons/parallel/GCPAuth FAILED: start of post-mortem logs <<<
helpers_test.go:241: ======>  post-mortem[TestAddons/parallel/GCPAuth]: minikube logs <======
helpers_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210310004204-1084876 logs -n 25

=== CONT  TestAddons/parallel/GCPAuth
helpers_test.go:243: (dbg) Done: out/minikube-linux-amd64 -p addons-20210310004204-1084876 logs -n 25: (5.31311341s)
helpers_test.go:248: TestAddons/parallel/GCPAuth logs: 
-- stdout --
	* ==> Docker <==
	* -- Logs begin at Wed 2021-03-10 00:42:08 UTC, end at Wed 2021-03-10 00:44:42 UTC. --
	* Mar 10 00:43:31 addons-20210310004204-1084876 dockerd[451]: time="2021-03-10T00:43:31.138398945Z" level=warning msg="reference for unknown type: " digest="sha256:0d15ffb5d10a176ef6e831d7865f98d51255ea5b0d16403618c94a004d049373" remote="quay.io/operator-framework/olm@sha256:0d15ffb5d10a176ef6e831d7865f98d51255ea5b0d16403618c94a004d049373"
	* Mar 10 00:43:35 addons-20210310004204-1084876 dockerd[451]: time="2021-03-10T00:43:35.673886075Z" level=warning msg="reference for unknown type: " digest="sha256:35ead85dd09aa8cc612fdb598d4e0e2f048bef816f1b74df5eeab67cd21b10aa" remote="quay.io/k8scsi/csi-snapshotter@sha256:35ead85dd09aa8cc612fdb598d4e0e2f048bef816f1b74df5eeab67cd21b10aa"
	* Mar 10 00:43:37 addons-20210310004204-1084876 dockerd[451]: time="2021-03-10T00:43:37.546271590Z" level=error msg="stream copy error: reading from a closed fifo"
	* Mar 10 00:43:37 addons-20210310004204-1084876 dockerd[451]: time="2021-03-10T00:43:37.546301949Z" level=error msg="stream copy error: reading from a closed fifo"
	* Mar 10 00:43:37 addons-20210310004204-1084876 dockerd[451]: time="2021-03-10T00:43:37.835155652Z" level=error msg="9c9d97edaecfaae475e84d6aaab83e526815575aff4d25fc87f89e2ef38d0829 cleanup: failed to delete container from containerd: no such container"
	* Mar 10 00:43:37 addons-20210310004204-1084876 dockerd[451]: time="2021-03-10T00:43:37.835248436Z" level=error msg="Handler for POST /v1.40/containers/9c9d97edaecfaae475e84d6aaab83e526815575aff4d25fc87f89e2ef38d0829/start returned error: OCI runtime create failed: container_linux.go:370: starting container process caused: process_linux.go:459: container init caused: read init-p: connection reset by peer: unknown"
	* Mar 10 00:43:39 addons-20210310004204-1084876 dockerd[451]: time="2021-03-10T00:43:39.168316904Z" level=warning msg="reference for unknown type: " digest="sha256:8fcb9472310dd424c4da8ee06ff200b5e6f091dff39a079e470599e4d0dcf328" remote="quay.io/k8scsi/csi-attacher@sha256:8fcb9472310dd424c4da8ee06ff200b5e6f091dff39a079e470599e4d0dcf328"
	* Mar 10 00:43:42 addons-20210310004204-1084876 dockerd[451]: time="2021-03-10T00:43:42.439140830Z" level=info msg="ignoring event" container=dc7447ab10836b520bf34ffd94f3e46ba1cfb31c50e27dfb38a5f47517b122a9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 00:43:42 addons-20210310004204-1084876 dockerd[451]: time="2021-03-10T00:43:42.935949551Z" level=info msg="ignoring event" container=3207b36c4d96b3fba159704e9d9ed4cb47634bc570a68db2ac16c6e7d83c1f0a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 00:43:43 addons-20210310004204-1084876 dockerd[451]: time="2021-03-10T00:43:43.433652438Z" level=warning msg="reference for unknown type: " digest="sha256:75ad39004ac49267981c9cb3323a7f73f0b203e1c181117363bf215e10144e8a" remote="quay.io/k8scsi/csi-resizer@sha256:75ad39004ac49267981c9cb3323a7f73f0b203e1c181117363bf215e10144e8a"
	* Mar 10 00:43:45 addons-20210310004204-1084876 dockerd[451]: time="2021-03-10T00:43:45.448503042Z" level=warning msg="reference for unknown type: " digest="sha256:aa223f9df8c1d477a9f2a4a2a7d104561e6d365e54671aacbc770dffcc0683ad" remote="quay.io/k8scsi/hostpathplugin@sha256:aa223f9df8c1d477a9f2a4a2a7d104561e6d365e54671aacbc770dffcc0683ad"
	* Mar 10 00:43:46 addons-20210310004204-1084876 dockerd[451]: time="2021-03-10T00:43:46.730831486Z" level=warning msg="reference for unknown type: " digest="sha256:46ba23c3fbaafd9e5bd01ea85b2f921d9f2217be082580edc22e6c704a83f02f" remote="us.gcr.io/k8s-artifacts-prod/ingress-nginx/controller@sha256:46ba23c3fbaafd9e5bd01ea85b2f921d9f2217be082580edc22e6c704a83f02f"
	* Mar 10 00:43:48 addons-20210310004204-1084876 dockerd[451]: time="2021-03-10T00:43:48.756533143Z" level=info msg="ignoring event" container=160a12f7ac030aecb431dc4e1d696a9b2fa6a0cca5737b4fe863982c4f57d19c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 00:43:49 addons-20210310004204-1084876 dockerd[451]: time="2021-03-10T00:43:49.034765170Z" level=info msg="ignoring event" container=99364f78916fe088b0e36926ceb06f1aa641b71156a2df7c62684ed5ba76ee3b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 00:43:53 addons-20210310004204-1084876 dockerd[451]: time="2021-03-10T00:43:53.534885500Z" level=info msg="ignoring event" container=797c3eb377c351ffecff7710995514c576f75d91e674ee2d2f6ac33a8c6ae126 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 00:43:53 addons-20210310004204-1084876 dockerd[451]: time="2021-03-10T00:43:53.668184299Z" level=info msg="ignoring event" container=7c5e4ff6cc0af181720fed146fc28e354d6d79caf4afa39d1a8dcce1b2b34804 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 00:43:56 addons-20210310004204-1084876 dockerd[451]: time="2021-03-10T00:43:56.793416210Z" level=warning msg="reference for unknown type: " digest="sha256:cc7b3fdaa1ccdea5866fcd171669dc0ed88d3477779d8ed32e3712c827e38cc0" remote="quay.io/operator-framework/upstream-community-operators@sha256:cc7b3fdaa1ccdea5866fcd171669dc0ed88d3477779d8ed32e3712c827e38cc0"
	* Mar 10 00:43:57 addons-20210310004204-1084876 dockerd[451]: time="2021-03-10T00:43:57.240268020Z" level=warning msg="Error persisting manifest" digest="sha256:cc7b3fdaa1ccdea5866fcd171669dc0ed88d3477779d8ed32e3712c827e38cc0" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:5aaf812b69ea33e8900a49335843a6689937e8354b0e1157dec5174f7d1c5374, expected sha256:cc7b3fdaa1ccdea5866fcd171669dc0ed88d3477779d8ed32e3712c827e38cc0: failed precondition" remote="quay.io/operator-framework/upstream-community-operators@sha256:cc7b3fdaa1ccdea5866fcd171669dc0ed88d3477779d8ed32e3712c827e38cc0"
	* Mar 10 00:44:00 addons-20210310004204-1084876 dockerd[451]: time="2021-03-10T00:44:00.835255603Z" level=warning msg="Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap."
	* Mar 10 00:44:00 addons-20210310004204-1084876 dockerd[451]: time="2021-03-10T00:44:00.962275854Z" level=warning msg="reference for unknown type: " digest="sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5" remote="quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5"
	* Mar 10 00:44:01 addons-20210310004204-1084876 dockerd[451]: time="2021-03-10T00:44:01.281952171Z" level=warning msg="Error persisting manifest" digest="sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:d175117d581140656af29ab572127883b195b635365717a7571dfa946a7c1f25, expected sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5: failed precondition" remote="quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5"
	* Mar 10 00:44:36 addons-20210310004204-1084876 dockerd[451]: time="2021-03-10T00:44:36.234627925Z" level=info msg="ignoring event" container=a582f80f0c5581c9f0773125032201f205ea6c86a9b76d2d8f0bbb1563971740 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 00:44:36 addons-20210310004204-1084876 dockerd[451]: time="2021-03-10T00:44:36.281029608Z" level=warning msg="Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap."
	* Mar 10 00:44:39 addons-20210310004204-1084876 dockerd[451]: time="2021-03-10T00:44:39.635543469Z" level=info msg="ignoring event" container=d18a6ade3a60f4ec0a804f0ce0ed57846a760a2d20f5dd4f83d2f9d4c3e952a5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 00:44:40 addons-20210310004204-1084876 dockerd[451]: time="2021-03-10T00:44:40.252113991Z" level=info msg="ignoring event" container=d6856ab05aeafac14eab1f29e8df3ae307e3a29a4c22eed673c54fe479cae5b7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                             CREATED              STATE               NAME                         ATTEMPT             POD ID
	* d18a6ade3a60f       alpine/helm@sha256:9d9fab00e0680f1328924429925595dfe96a68531c8a9c1518d05ee2ad45c36f                                               3 seconds ago        Exited              helm-test                    0                   d6856ab05aeaf
	* d8acc04936d18       f1eb4bba1cfa4                                                                                                                     6 seconds ago        Running             registry-server              1                   fe89ce4e8a3f1
	* e2c342ff46372       busybox@sha256:bda689514be526d9557ad442312e5d541757c453c50b8cf2ae68597c291385a1                                                   9 seconds ago        Running             busybox                      0                   0c5e9fe9018d3
	* 8b15b145a8b7b       quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5                              40 seconds ago       Running             liveness-probe               0                   df3e4124342cd
	* a582f80f0c558       quay.io/operator-framework/upstream-community-operators@sha256:cc7b3fdaa1ccdea5866fcd171669dc0ed88d3477779d8ed32e3712c827e38cc0   42 seconds ago       Exited              registry-server              0                   fe89ce4e8a3f1
	* 58dd0971b730e       us.gcr.io/k8s-artifacts-prod/ingress-nginx/controller@sha256:46ba23c3fbaafd9e5bd01ea85b2f921d9f2217be082580edc22e6c704a83f02f     46 seconds ago       Running             controller                   0                   c57cbbbfdb092
	* acafde4ea89ba       f1bce969b7834                                                                                                                     52 seconds ago       Running             packageserver                0                   4120c674374ad
	* 3a99da564bac0       quay.io/k8scsi/hostpathplugin@sha256:aa223f9df8c1d477a9f2a4a2a7d104561e6d365e54671aacbc770dffcc0683ad                             56 seconds ago       Running             hostpath                     0                   df3e4124342cd
	* 1479c30d79616       quay.io/k8scsi/csi-resizer@sha256:75ad39004ac49267981c9cb3323a7f73f0b203e1c181117363bf215e10144e8a                                57 seconds ago       Running             csi-resizer                  0                   5fecdc0229828
	* 2776daef98f97       quay.io/k8scsi/csi-attacher@sha256:8fcb9472310dd424c4da8ee06ff200b5e6f091dff39a079e470599e4d0dcf328                               59 seconds ago       Running             csi-attacher                 0                   ed8bea76f0305
	* 2d151c63b5d32       f1bce969b7834                                                                                                                     About a minute ago   Running             packageserver                0                   ea3e2acdbc3d5
	* dc7447ab10836       4d4f44df9f905                                                                                                                     About a minute ago   Exited              patch                        2                   3207b36c4d96b
	* 9658584e36e73       quay.io/k8scsi/csi-snapshotter@sha256:35ead85dd09aa8cc612fdb598d4e0e2f048bef816f1b74df5eeab67cd21b10aa                            About a minute ago   Running             csi-snapshotter              0                   29331e5b4c289
	* 0b343444c5656       quay.io/operator-framework/olm@sha256:0d15ffb5d10a176ef6e831d7865f98d51255ea5b0d16403618c94a004d049373                            About a minute ago   Running             catalog-operator             0                   62d585df87b2d
	* 612cebdce2c65       quay.io/operator-framework/olm@sha256:0d15ffb5d10a176ef6e831d7865f98d51255ea5b0d16403618c94a004d049373                            About a minute ago   Running             olm-operator                 0                   8e771878e9f59
	* 90e635ff84a93       quay.io/k8scsi/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309                  About a minute ago   Running             node-driver-registrar        0                   df3e4124342cd
	* d915219302fff       gcr.io/k8s-staging-sig-storage/csi-provisioner@sha256:8f36191970a82677ffe222007b08395dd7af0a5bb5b93db0e82523b43de2bfb2            About a minute ago   Running             csi-provisioner              0                   002add671876d
	* 83c8dbd87343f       k8s.gcr.io/sig-storage/snapshot-controller@sha256:00fcc441ea9f72899c25eed61d602272a2a58c5f0014332bdcb5ac24acef08e4                About a minute ago   Running             volume-snapshot-controller   0                   3523f3b195734
	* 7bbfe3c17bc34       jettech/kube-webhook-certgen@sha256:da8122a78d7387909cf34a0f34db0cce672da1379ee4fd57c626a4afe9ac12b7                              About a minute ago   Exited              create                       0                   6a28dfceebb3b
	* 29d5b4a9ebc5c       k8s.gcr.io/sig-storage/snapshot-controller@sha256:00fcc441ea9f72899c25eed61d602272a2a58c5f0014332bdcb5ac24acef08e4                About a minute ago   Running             volume-snapshot-controller   0                   656fccfa69acf
	* 38aa3b961ad71       gcr.io/kubernetes-helm/tiller@sha256:6003775d503546087266eda39418d221f9afb5ccfe35f637c32a1161619a3f9c                             About a minute ago   Exited              tiller                       0                   d8ec543ab7afd
	* 54042b74af506       gcr.io/google_containers/kube-registry-proxy@sha256:1040f25a5273de0d72c54865a8efd47e3292de9fb8e5353e3fa76736b854f2da              About a minute ago   Running             registry-proxy               0                   5794150e26166
	* b3b74b4daf748       k8s.gcr.io/metrics-server-amd64@sha256:49a9f12f7067d11f42c803dbe61ed2c1299959ad85cb315b25ff7eef8e6b8892                           About a minute ago   Running             metrics-server               0                   e2033ba7d30b9
	* 02906d8689b6a       registry@sha256:d5459fcb27aecc752520df4b492b08358a1912fcdfa454f7d2101d4b09991daa                                                  About a minute ago   Running             registry                     0                   9d87374a74f8c
	* 63e14282682c0       bfe3a36ebd252                                                                                                                     About a minute ago   Running             coredns                      0                   b2b96f4081b3a
	* e4d14e816543b       85069258b98ac                                                                                                                     About a minute ago   Running             storage-provisioner          0                   696a2dade30be
	* 0bd04dbed7257       43154ddb57a83                                                                                                                     About a minute ago   Running             kube-proxy                   0                   e435da8b373ff
	* d1c1a132de64d       a27166429d98e                                                                                                                     2 minutes ago        Running             kube-controller-manager      0                   9e6c268054ecc
	* 924189b6ebf88       a8c2fdb8bf76e                                                                                                                     2 minutes ago        Running             kube-apiserver               0                   50b0c6719f911
	* fcd904ed98760       0369cf4303ffd                                                                                                                     2 minutes ago        Running             etcd                         0                   db00aefe2f9b4
	* cb950dda75874       ed2c44fbdd78b                                                                                                                     2 minutes ago        Running             kube-scheduler               0                   6d7e8e5c04ecc
	* 
	* ==> coredns [63e14282682c] <==
	* .:53
	* [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	* CoreDNS-1.7.0
	* linux/amd64, go1.14.4, f59c03d
	* 
	* ==> describe nodes <==
	* Name:               addons-20210310004204-1084876
	* Roles:              control-plane,master
	* Labels:             beta.kubernetes.io/arch=amd64
	*                     beta.kubernetes.io/os=linux
	*                     kubernetes.io/arch=amd64
	*                     kubernetes.io/hostname=addons-20210310004204-1084876
	*                     kubernetes.io/os=linux
	*                     minikube.k8s.io/commit=8d9e062aa56d18f701a92d5344bd63e9d7a0bc2e
	*                     minikube.k8s.io/name=addons-20210310004204-1084876
	*                     minikube.k8s.io/updated_at=2021_03_10T00_42_34_0700
	*                     minikube.k8s.io/version=v1.18.1
	*                     node-role.kubernetes.io/control-plane=
	*                     node-role.kubernetes.io/master=
	*                     topology.hostpath.csi/node=addons-20210310004204-1084876
	* Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-20210310004204-1084876"}
	*                     kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	*                     node.alpha.kubernetes.io/ttl: 0
	*                     volumes.kubernetes.io/controller-managed-attach-detach: true
	* CreationTimestamp:  Wed, 10 Mar 2021 00:42:31 +0000
	* Taints:             <none>
	* Unschedulable:      false
	* Lease:
	*   HolderIdentity:  addons-20210310004204-1084876
	*   AcquireTime:     <unset>
	*   RenewTime:       Wed, 10 Mar 2021 00:44:36 +0000
	* Conditions:
	*   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	*   ----             ------  -----------------                 ------------------                ------                       -------
	*   MemoryPressure   False   Wed, 10 Mar 2021 00:44:38 +0000   Wed, 10 Mar 2021 00:42:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	*   DiskPressure     False   Wed, 10 Mar 2021 00:44:38 +0000   Wed, 10 Mar 2021 00:42:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	*   PIDPressure      False   Wed, 10 Mar 2021 00:44:38 +0000   Wed, 10 Mar 2021 00:42:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	*   Ready            True    Wed, 10 Mar 2021 00:44:38 +0000   Wed, 10 Mar 2021 00:42:45 +0000   KubeletReady                 kubelet is posting ready status
	* Addresses:
	*   InternalIP:  192.168.49.205
	*   Hostname:    addons-20210310004204-1084876
	* Capacity:
	*   cpu:                8
	*   ephemeral-storage:  309568300Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             30886996Ki
	*   pods:               110
	* Allocatable:
	*   cpu:                8
	*   ephemeral-storage:  309568300Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             30886996Ki
	*   pods:               110
	* System Info:
	*   Machine ID:                 84fb46bd39d2483a97ab4430ee4a5e3a
	*   System UUID:                27a2d8c6-7eee-471e-8387-43555877de18
	*   Boot ID:                    cfed3db4-db6c-4655-8abe-2e1ce08d21a8
	*   Kernel Version:             4.9.0-15-amd64
	*   OS Image:                   Ubuntu 20.04.1 LTS
	*   Operating System:           linux
	*   Architecture:               amd64
	*   Container Runtime Version:  docker://20.10.3
	*   Kubelet Version:            v1.20.2
	*   Kube-Proxy Version:         v1.20.2
	* PodCIDR:                      10.244.0.0/24
	* PodCIDRs:                     10.244.0.0/24
	* Non-terminated Pods:          (29 in total)
	*   Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	*   ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	*   default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	*   default                     registry-test                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	*   default                     task-pv-pod                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	*   kube-system                 coredns-74ff55c5b-xlj4r                                  100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     109s
	*   kube-system                 csi-hostpath-attacher-0                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	*   kube-system                 csi-hostpath-provisioner-0                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	*   kube-system                 csi-hostpath-resizer-0                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	*   kube-system                 csi-hostpath-snapshotter-0                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	*   kube-system                 csi-hostpathplugin-0                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	*   kube-system                 etcd-addons-20210310004204-1084876                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m7s
	*   kube-system                 ingress-nginx-controller-65cf89dc4f-v4q4w                100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         104s
	*   kube-system                 kube-apiserver-addons-20210310004204-1084876             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m7s
	*   kube-system                 kube-controller-manager-addons-20210310004204-1084876    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m7s
	*   kube-system                 kube-proxy-dmzxd                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	*   kube-system                 kube-scheduler-addons-20210310004204-1084876             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m7s
	*   kube-system                 metrics-server-56c4f8c9d6-s86jb                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	*   kube-system                 registry-bqsz2                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	*   kube-system                 registry-proxy-jgjdk                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	*   kube-system                 snapshot-controller-66df655854-bl5n2                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	*   kube-system                 snapshot-controller-66df655854-shthb                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	*   kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	*   kube-system                 tiller-deploy-7c86b7fbdf-k4qzh                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	*   olm                         catalog-operator-74df75768-s529b                         10m (0%)      0 (0%)      80Mi (0%)        0 (0%)         100s
	*   olm                         olm-operator-5dd79ffdff-rzqnh                            10m (0%)      0 (0%)      160Mi (0%)       0 (0%)         100s
	*   olm                         operatorhubio-catalog-vqwtg                              10m (0%)      100m (1%)   50Mi (0%)        100Mi (0%)     66s
	*   olm                         packageserver-66d48d56bd-6hbq5                           10m (0%)      0 (0%)      50Mi (0%)        0 (0%)         64s
	*   olm                         packageserver-66d48d56bd-vw9gf                           10m (0%)      0 (0%)      50Mi (0%)        0 (0%)         64s
	*   olm                         packageserver-ddbf8bbf7-7mfn7                            10m (0%)      0 (0%)      50Mi (0%)        0 (0%)         54s
	*   olm                         packageserver-ddbf8bbf7-pqnqh                            10m (0%)      0 (0%)      50Mi (0%)        0 (0%)         63s
	* Allocated resources:
	*   (Total limits may be over 100 percent, i.e., overcommitted.)
	*   Resource           Requests    Limits
	*   --------           --------    ------
	*   cpu                920m (11%)  100m (1%)
	*   memory             750Mi (2%)  270Mi (0%)
	*   ephemeral-storage  100Mi (0%)  0 (0%)
	*   hugepages-1Gi      0 (0%)      0 (0%)
	*   hugepages-2Mi      0 (0%)      0 (0%)
	* Events:
	*   Type    Reason                   Age   From        Message
	*   ----    ------                   ----  ----        -------
	*   Normal  Starting                 2m7s  kubelet     Starting kubelet.
	*   Normal  NodeHasSufficientMemory  2m7s  kubelet     Node addons-20210310004204-1084876 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    2m7s  kubelet     Node addons-20210310004204-1084876 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     2m7s  kubelet     Node addons-20210310004204-1084876 status is now: NodeHasSufficientPID
	*   Normal  NodeNotReady             2m7s  kubelet     Node addons-20210310004204-1084876 status is now: NodeNotReady
	*   Normal  NodeAllocatableEnforced  2m7s  kubelet     Updated Node Allocatable limit across pods
	*   Normal  NodeReady                117s  kubelet     Node addons-20210310004204-1084876 status is now: NodeReady
	*   Normal  Starting                 106s  kube-proxy  Starting kube-proxy.
	* 
	* ==> dmesg <==
	* [  +0.000003] ll header: 00000000: ff ff ff ff ff ff 1e 14 10 08 59 7f 08 06        ..........Y...
	* [  +0.000493] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	* [  +0.000003] ll header: 00000000: ff ff ff ff ff ff 3e 39 58 88 7a fe 08 06        ......>9X.z...
	* [  +4.705369] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev eth0
	* [  +0.000003] ll header: 00000000: ff ff ff ff ff ff 22 40 30 c4 fd 2b 08 06        ......"@0..+..
	* [  +2.359038] cgroup: cgroup2: unknown option "nsdelegate"
	* [  +7.044287] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	* [  +0.000003] ll header: 00000000: ff ff ff ff ff ff 22 40 30 c4 fd 2b 08 06        ......"@0..+..
	* [  +0.000429] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	* [  +0.000003] ll header: 00000000: ff ff ff ff ff ff ea f1 c6 8a 5d de 08 06        ..........]...
	* [ +12.922287] cgroup: cgroup2: unknown option "nsdelegate"
	* [ +12.180043] cgroup: cgroup2: unknown option "nsdelegate"
	* [Mar10 00:35] cgroup: cgroup2: unknown option "nsdelegate"
	* [  +8.325114] cgroup: cgroup2: unknown option "nsdelegate"
	* [Mar10 00:36] cgroup: cgroup2: unknown option "nsdelegate"
	* [Mar10 00:37] cgroup: cgroup2: unknown option "nsdelegate"
	* [ +24.041254] cgroup: cgroup2: unknown option "nsdelegate"
	* [Mar10 00:39] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	* [  +0.000003] ll header: 00000000: ff ff ff ff ff ff 8e 4a 5c 40 b6 f3 08 06        .......J\@....
	* [  +0.000007] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev eth0
	* [  +0.000001] ll header: 00000000: ff ff ff ff ff ff 8e 4a 5c 40 b6 f3 08 06        .......J\@....
	* [  +0.302372] IPv4: martian source 10.85.0.3 from 10.85.0.3, on dev eth0
	* [  +0.000004] ll header: 00000000: ff ff ff ff ff ff b2 f2 a1 0b 5d ea 08 06        ..........]...
	* [ +13.593684] cgroup: cgroup2: unknown option "nsdelegate"
	* [Mar10 00:42] cgroup: cgroup2: unknown option "nsdelegate"
	* 
	* ==> etcd [fcd904ed9876] <==
	* 2021-03-10 00:42:56.256883 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:1 size:263" took too long (112.61522ms) to execute
	* 2021-03-10 00:42:56.640079 W | etcdserver: read-only range request "key:\"/registry/clusterrolebindings/tiller-clusterrolebinding\" " with result "range_response_count:0 size:5" took too long (101.585148ms) to execute
	* 2021-03-10 00:42:56.640381 W | etcdserver: read-only range request "key:\"/registry/services/specs/kube-system/metrics-server\" " with result "range_response_count:0 size:5" took too long (190.348268ms) to execute
	* 2021-03-10 00:42:56.640651 W | etcdserver: read-only range request "key:\"/registry/deployments/kube-system/tiller-deploy\" " with result "range_response_count:1 size:4657" took too long (102.341638ms) to execute
	* 2021-03-10 00:43:04.740785 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 00:43:14.732840 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 00:43:24.733259 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 00:43:34.734191 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 00:43:44.733754 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 00:43:47.624077 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:21 size:91549" took too long (436.463054ms) to execute
	* 2021-03-10 00:43:47.624317 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/catalogsources/olm/operatorhubio-catalog\" " with result "range_response_count:1 size:2183" took too long (388.929592ms) to execute
	* 2021-03-10 00:43:47.632799 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/operatorgroups/olm/\" range_end:\"/registry/operators.coreos.com/operatorgroups/olm0\" " with result "range_response_count:1 size:1332" took too long (274.677172ms) to execute
	* 2021-03-10 00:43:47.632956 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:21 size:91549" took too long (182.321695ms) to execute
	* 2021-03-10 00:43:47.635242 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:21 size:91549" took too long (283.055709ms) to execute
	* 2021-03-10 00:43:54.733323 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 00:44:03.450821 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:1127" took too long (181.562903ms) to execute
	* 2021-03-10 00:44:04.689302 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 00:44:14.689261 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 00:44:16.926372 W | wal: sync duration of 1.410090316s, expected less than 1s
	* 2021-03-10 00:44:16.927035 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (783.518432ms) to execute
	* 2021-03-10 00:44:16.927262 W | etcdserver: read-only range request "key:\"/registry/storageclasses/\" range_end:\"/registry/storageclasses0\" count_only:true " with result "range_response_count:0 size:7" took too long (635.974285ms) to execute
	* 2021-03-10 00:44:16.927299 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/snapshot-controller-leader\" " with result "range_response_count:1 size:503" took too long (449.876632ms) to execute
	* 2021-03-10 00:44:16.927346 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (485.096852ms) to execute
	* 2021-03-10 00:44:24.689263 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 00:44:34.689288 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 
	* ==> kernel <==
	*  00:44:43 up  4:27,  0 users,  load average: 1.86, 2.14, 3.15
	* Linux addons-20210310004204-1084876 4.9.0-15-amd64 #1 SMP Debian 4.9.258-1 (2021-03-08) x86_64 x86_64 x86_64 GNU/Linux
	* PRETTY_NAME="Ubuntu 20.04.1 LTS"
	* 
	* ==> kube-apiserver [924189b6ebf8] <==
	* I0310 00:43:25.534943       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	* I0310 00:43:25.547417       1 client.go:360] parsed scheme: "endpoint"
	* I0310 00:43:25.547467       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	* I0310 00:43:31.778268       1 client.go:360] parsed scheme: "passthrough"
	* I0310 00:43:31.778314       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	* I0310 00:43:31.778324       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	* W0310 00:43:42.456833       1 handler_proxy.go:102] no RequestInfo found in the context
	* E0310 00:43:42.457159       1 controller.go:116] loading OpenAPI spec for "v1.packages.operators.coreos.com" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	* , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	* I0310 00:43:42.457179       1 controller.go:129] OpenAPI AggregationController: action for item v1.packages.operators.coreos.com: Rate Limited Requeue.
	* E0310 00:43:44.483232       1 available_controller.go:508] v1.packages.operators.coreos.com failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1.packages.operators.coreos.com": the object has been modified; please apply your changes to the latest version and try again
	* E0310 00:43:45.482104       1 controller.go:116] loading OpenAPI spec for "v1.packages.operators.coreos.com" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: error trying to reach service: x509: certificate signed by unknown authority (possibly because of "x509: ECDSA verification failure" while trying to verify candidate authority certificate "Red Hat, Inc.")
	* , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	* I0310 00:43:45.482133       1 controller.go:129] OpenAPI AggregationController: action for item v1.packages.operators.coreos.com: Rate Limited Requeue.
	* I0310 00:44:04.936350       1 client.go:360] parsed scheme: "passthrough"
	* I0310 00:44:04.936403       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	* I0310 00:44:04.936412       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	* E0310 00:44:16.571067       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist
	* I0310 00:44:16.571092       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	* I0310 00:44:16.927371       1 trace.go:205] Trace[220703427]: "Create" url:/api/v1/namespaces/olm/events,user-agent:kubelet/v1.20.2 (linux/amd64) kubernetes/faecb19,client:192.168.49.205 (10-Mar-2021 00:44:15.842) (total time: 1084ms):
	* Trace[220703427]: ---"Object stored in database" 1084ms (00:44:00.927)
	* Trace[220703427]: [1.084649188s] [1.084649188s] END
	* I0310 00:44:42.734499       1 client.go:360] parsed scheme: "passthrough"
	* I0310 00:44:42.734548       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	* I0310 00:44:42.734556       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	* 
	* ==> kube-controller-manager [d1c1a132de64] <==
	* I0310 00:43:24.062723       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for clusterserviceversions.operators.coreos.com
	* I0310 00:43:24.062766       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for volumesnapshots.snapshot.storage.k8s.io
	* I0310 00:43:24.062929       1 shared_informer.go:240] Waiting for caches to sync for resource quota
	* I0310 00:43:24.363190       1 shared_informer.go:247] Caches are synced for resource quota 
	* I0310 00:43:25.247790       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	* I0310 00:43:25.648075       1 shared_informer.go:247] Caches are synced for garbage collector 
	* I0310 00:43:28.292945       1 event.go:291] "Event occurred" object="kube-system/ingress-nginx-admission-create" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	* I0310 00:43:38.035330       1 event.go:291] "Event occurred" object="olm/packageserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set packageserver-66d48d56bd to 2"
	* I0310 00:43:38.045751       1 event.go:291] "Event occurred" object="olm/packageserver-66d48d56bd" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: packageserver-66d48d56bd-vw9gf"
	* I0310 00:43:38.052870       1 event.go:291] "Event occurred" object="olm/packageserver-66d48d56bd" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: packageserver-66d48d56bd-6hbq5"
	* I0310 00:43:39.564325       1 event.go:291] "Event occurred" object="olm/packageserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set packageserver-ddbf8bbf7 to 1"
	* I0310 00:43:39.639330       1 event.go:291] "Event occurred" object="olm/packageserver-ddbf8bbf7" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: packageserver-ddbf8bbf7-pqnqh"
	* I0310 00:43:42.642946       1 event.go:291] "Event occurred" object="kube-system/ingress-nginx-admission-patch" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	* I0310 00:43:48.466608       1 event.go:291] "Event occurred" object="olm/packageserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set packageserver-66d48d56bd to 1"
	* I0310 00:43:48.535467       1 event.go:291] "Event occurred" object="olm/packageserver-66d48d56bd" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: packageserver-66d48d56bd-6hbq5"
	* I0310 00:43:48.548168       1 event.go:291] "Event occurred" object="olm/packageserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set packageserver-ddbf8bbf7 to 2"
	* I0310 00:43:48.551808       1 event.go:291] "Event occurred" object="olm/packageserver-ddbf8bbf7" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: packageserver-ddbf8bbf7-7mfn7"
	* I0310 00:43:53.302959       1 event.go:291] "Event occurred" object="olm/packageserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set packageserver-66d48d56bd to 0"
	* I0310 00:43:53.338619       1 event.go:291] "Event occurred" object="olm/packageserver-66d48d56bd" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: packageserver-66d48d56bd-vw9gf"
	* E0310 00:43:54.350910       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
	* I0310 00:44:40.929021       1 event.go:291] "Event occurred" object="default/hpvc" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"hostpath.csi.k8s.io\" or manually created by system administrator"
	* I0310 00:44:40.929348       1 event.go:291] "Event occurred" object="default/hpvc" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"hostpath.csi.k8s.io\" or manually created by system administrator"
	* I0310 00:44:41.340106       1 reconciler.go:275] attacherDetacher.AttachVolume started for volume "pvc-b5b99d04-5b8d-40b6-9d4a-6c4815dcf8c9" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^cc874cc0-8139-11eb-97b4-0242ac11000b") from node "addons-20210310004204-1084876" 
	* I0310 00:44:41.355842       1 operation_generator.go:360] AttachVolume.Attach succeeded for volume "pvc-b5b99d04-5b8d-40b6-9d4a-6c4815dcf8c9" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^cc874cc0-8139-11eb-97b4-0242ac11000b") from node "addons-20210310004204-1084876" 
	* I0310 00:44:41.356209       1 event.go:291] "Event occurred" object="default/task-pv-pod" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-b5b99d04-5b8d-40b6-9d4a-6c4815dcf8c9\" "
	* 
	* ==> kube-proxy [0bd04dbed725] <==
	* I0310 00:42:56.434327       1 node.go:172] Successfully retrieved node IP: 192.168.49.205
	* I0310 00:42:56.434457       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.49.205), assume IPv4 operation
	* W0310 00:42:56.647896       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	* I0310 00:42:56.647999       1 server_others.go:185] Using iptables Proxier.
	* I0310 00:42:56.648322       1 server.go:650] Version: v1.20.2
	* I0310 00:42:56.650377       1 conntrack.go:52] Setting nf_conntrack_max to 262144
	* I0310 00:42:56.650690       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
	* I0310 00:42:56.650748       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
	* I0310 00:42:56.651105       1 config.go:315] Starting service config controller
	* I0310 00:42:56.651117       1 shared_informer.go:240] Waiting for caches to sync for service config
	* I0310 00:42:56.651804       1 config.go:224] Starting endpoint slice config controller
	* I0310 00:42:56.651823       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	* I0310 00:42:56.753794       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	* I0310 00:42:56.753895       1 shared_informer.go:247] Caches are synced for service config 
	* 
	* ==> kube-scheduler [cb950dda7587] <==
	* I0310 00:42:27.237681       1 serving.go:331] Generated self-signed cert in-memory
	* W0310 00:42:31.442341       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	* W0310 00:42:31.442382       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	* W0310 00:42:31.442396       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	* W0310 00:42:31.442407       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	* I0310 00:42:31.461097       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	* I0310 00:42:31.461131       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	* I0310 00:42:31.461617       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	* I0310 00:42:31.461701       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	* E0310 00:42:31.462892       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* E0310 00:42:31.533774       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E0310 00:42:31.533921       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	* E0310 00:42:31.534058       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	* E0310 00:42:31.534159       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	* E0310 00:42:31.540263       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	* E0310 00:42:31.540364       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* E0310 00:42:31.540382       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	* E0310 00:42:31.540515       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* E0310 00:42:31.540558       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	* E0310 00:42:31.554872       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	* E0310 00:42:31.554997       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	* E0310 00:42:32.283803       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* E0310 00:42:32.643264       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* I0310 00:42:34.661365       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2021-03-10 00:42:08 UTC, end at Wed 2021-03-10 00:44:43 UTC. --
	* Mar 10 00:44:41 addons-20210310004204-1084876 kubelet[2310]: I0310 00:44:41.432962    2310 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-sdsp9" (UniqueName: "kubernetes.io/secret/65760533-48dd-4a07-906c-82bd50b9c83d-default-token-sdsp9") pod "registry-test" (UID: "65760533-48dd-4a07-906c-82bd50b9c83d")
	* Mar 10 00:44:41 addons-20210310004204-1084876 kubelet[2310]: I0310 00:44:41.433076    2310 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-b5b99d04-5b8d-40b6-9d4a-6c4815dcf8c9" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^cc874cc0-8139-11eb-97b4-0242ac11000b") pod "task-pv-pod" (UID: "1a5b6d85-926f-4839-acce-3bc95a554414")
	* Mar 10 00:44:41 addons-20210310004204-1084876 kubelet[2310]: E0310 00:44:41.433207    2310 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/csi/hostpath.csi.k8s.io^cc874cc0-8139-11eb-97b4-0242ac11000b podName: nodeName:}" failed. No retries permitted until 2021-03-10 00:44:41.933145587 +0000 UTC m=+127.619440901 (durationBeforeRetry 500ms). Error: "Volume has not been added to the list of VolumesInUse in the node's volume status for volume \"pvc-b5b99d04-5b8d-40b6-9d4a-6c4815dcf8c9\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^cc874cc0-8139-11eb-97b4-0242ac11000b\") pod \"task-pv-pod\" (UID: \"1a5b6d85-926f-4839-acce-3bc95a554414\") "
	* Mar 10 00:44:41 addons-20210310004204-1084876 kubelet[2310]: E0310 00:44:41.433217    2310 secret.go:195] Couldn't get secret kube-system/tiller-token-gpgsr: secret "tiller-token-gpgsr" not found
	* Mar 10 00:44:41 addons-20210310004204-1084876 kubelet[2310]: I0310 00:44:41.433271    2310 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-sdsp9" (UniqueName: "kubernetes.io/secret/1a5b6d85-926f-4839-acce-3bc95a554414-default-token-sdsp9") pod "task-pv-pod" (UID: "1a5b6d85-926f-4839-acce-3bc95a554414")
	* Mar 10 00:44:41 addons-20210310004204-1084876 kubelet[2310]: E0310 00:44:41.433295    2310 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/1c948044-7876-4077-a225-b7964c8e9b4e-tiller-token-gpgsr podName:1c948044-7876-4077-a225-b7964c8e9b4e nodeName:}" failed. No retries permitted until 2021-03-10 00:44:41.933268638 +0000 UTC m=+127.619563955 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"tiller-token-gpgsr\" (UniqueName: \"kubernetes.io/secret/1c948044-7876-4077-a225-b7964c8e9b4e-tiller-token-gpgsr\") pod \"tiller-deploy-7c86b7fbdf-k4qzh\" (UID: \"1c948044-7876-4077-a225-b7964c8e9b4e\") : secret \"tiller-token-gpgsr\" not found"
	* Mar 10 00:44:41 addons-20210310004204-1084876 kubelet[2310]: I0310 00:44:41.935511    2310 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-b5b99d04-5b8d-40b6-9d4a-6c4815dcf8c9" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^cc874cc0-8139-11eb-97b4-0242ac11000b") pod "task-pv-pod" (UID: "1a5b6d85-926f-4839-acce-3bc95a554414")
	* Mar 10 00:44:41 addons-20210310004204-1084876 kubelet[2310]: E0310 00:44:41.935726    2310 secret.go:195] Couldn't get secret kube-system/tiller-token-gpgsr: secret "tiller-token-gpgsr" not found
	* Mar 10 00:44:41 addons-20210310004204-1084876 kubelet[2310]: E0310 00:44:41.935758    2310 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/csi/hostpath.csi.k8s.io^cc874cc0-8139-11eb-97b4-0242ac11000b podName: nodeName:}" failed. No retries permitted until 2021-03-10 00:44:42.935699187 +0000 UTC m=+128.621994541 (durationBeforeRetry 1s). Error: "Volume has not been added to the list of VolumesInUse in the node's volume status for volume \"pvc-b5b99d04-5b8d-40b6-9d4a-6c4815dcf8c9\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^cc874cc0-8139-11eb-97b4-0242ac11000b\") pod \"task-pv-pod\" (UID: \"1a5b6d85-926f-4839-acce-3bc95a554414\") "
	* Mar 10 00:44:41 addons-20210310004204-1084876 kubelet[2310]: E0310 00:44:41.935850    2310 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/1c948044-7876-4077-a225-b7964c8e9b4e-tiller-token-gpgsr podName:1c948044-7876-4077-a225-b7964c8e9b4e nodeName:}" failed. No retries permitted until 2021-03-10 00:44:42.935819265 +0000 UTC m=+128.622114576 (durationBeforeRetry 1s). Error: "MountVolume.SetUp failed for volume \"tiller-token-gpgsr\" (UniqueName: \"kubernetes.io/secret/1c948044-7876-4077-a225-b7964c8e9b4e-tiller-token-gpgsr\") pod \"tiller-deploy-7c86b7fbdf-k4qzh\" (UID: \"1c948044-7876-4077-a225-b7964c8e9b4e\") : secret \"tiller-token-gpgsr\" not found"
	* Mar 10 00:44:42 addons-20210310004204-1084876 kubelet[2310]: W0310 00:44:42.137902    2310 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/registry-test through plugin: invalid network status for
	* Mar 10 00:44:42 addons-20210310004204-1084876 kubelet[2310]: W0310 00:44:42.181482    2310 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/registry-test through plugin: invalid network status for
	* Mar 10 00:44:42 addons-20210310004204-1084876 kubelet[2310]: I0310 00:44:42.939414    2310 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-b5b99d04-5b8d-40b6-9d4a-6c4815dcf8c9" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^cc874cc0-8139-11eb-97b4-0242ac11000b") pod "task-pv-pod" (UID: "1a5b6d85-926f-4839-acce-3bc95a554414")
	* Mar 10 00:44:42 addons-20210310004204-1084876 kubelet[2310]: E0310 00:44:42.939670    2310 secret.go:195] Couldn't get secret kube-system/tiller-token-gpgsr: secret "tiller-token-gpgsr" not found
	* Mar 10 00:44:42 addons-20210310004204-1084876 kubelet[2310]: E0310 00:44:42.939672    2310 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/csi/hostpath.csi.k8s.io^cc874cc0-8139-11eb-97b4-0242ac11000b podName: nodeName:}" failed. No retries permitted until 2021-03-10 00:44:44.939612977 +0000 UTC m=+130.625908337 (durationBeforeRetry 2s). Error: "Volume has not been added to the list of VolumesInUse in the node's volume status for volume \"pvc-b5b99d04-5b8d-40b6-9d4a-6c4815dcf8c9\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^cc874cc0-8139-11eb-97b4-0242ac11000b\") pod \"task-pv-pod\" (UID: \"1a5b6d85-926f-4839-acce-3bc95a554414\") "
	* Mar 10 00:44:42 addons-20210310004204-1084876 kubelet[2310]: E0310 00:44:42.939786    2310 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/1c948044-7876-4077-a225-b7964c8e9b4e-tiller-token-gpgsr podName:1c948044-7876-4077-a225-b7964c8e9b4e nodeName:}" failed. No retries permitted until 2021-03-10 00:44:44.93976535 +0000 UTC m=+130.626060634 (durationBeforeRetry 2s). Error: "MountVolume.SetUp failed for volume \"tiller-token-gpgsr\" (UniqueName: \"kubernetes.io/secret/1c948044-7876-4077-a225-b7964c8e9b4e-tiller-token-gpgsr\") pod \"tiller-deploy-7c86b7fbdf-k4qzh\" (UID: \"1c948044-7876-4077-a225-b7964c8e9b4e\") : secret \"tiller-token-gpgsr\" not found"
	* Mar 10 00:44:43 addons-20210310004204-1084876 kubelet[2310]: I0310 00:44:43.206865    2310 scope.go:95] [topologymanager] RemoveContainer - Container ID: 38aa3b961ad7131eb812e2a79ccbb0f0ff1a1c8507854b8a7cdfd27b99f9a892
	* Mar 10 00:44:43 addons-20210310004204-1084876 kubelet[2310]: I0310 00:44:43.222843    2310 scope.go:95] [topologymanager] RemoveContainer - Container ID: 38aa3b961ad7131eb812e2a79ccbb0f0ff1a1c8507854b8a7cdfd27b99f9a892
	* Mar 10 00:44:43 addons-20210310004204-1084876 kubelet[2310]: E0310 00:44:43.223676    2310 remote_runtime.go:332] ContainerStatus "38aa3b961ad7131eb812e2a79ccbb0f0ff1a1c8507854b8a7cdfd27b99f9a892" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 38aa3b961ad7131eb812e2a79ccbb0f0ff1a1c8507854b8a7cdfd27b99f9a892
	* Mar 10 00:44:43 addons-20210310004204-1084876 kubelet[2310]: W0310 00:44:43.223738    2310 pod_container_deletor.go:52] [pod_container_deletor] DeleteContainer returned error for (id={docker 38aa3b961ad7131eb812e2a79ccbb0f0ff1a1c8507854b8a7cdfd27b99f9a892}): failed to get container status "38aa3b961ad7131eb812e2a79ccbb0f0ff1a1c8507854b8a7cdfd27b99f9a892": rpc error: code = Unknown desc = Error: No such container: 38aa3b961ad7131eb812e2a79ccbb0f0ff1a1c8507854b8a7cdfd27b99f9a892
	* Mar 10 00:44:43 addons-20210310004204-1084876 kubelet[2310]: E0310 00:44:43.288878    2310 kuberuntime_container.go:662] killContainer "tiller"(id={"docker" "38aa3b961ad7131eb812e2a79ccbb0f0ff1a1c8507854b8a7cdfd27b99f9a892"}) for pod "<nil>" failed: rpc error: code = Unknown desc = Error: No such container: 38aa3b961ad7131eb812e2a79ccbb0f0ff1a1c8507854b8a7cdfd27b99f9a892
	* Mar 10 00:44:43 addons-20210310004204-1084876 kubelet[2310]: E0310 00:44:43.290603    2310 kubelet_pods.go:1256] Failed killing the pod "tiller-deploy-7c86b7fbdf-k4qzh": failed to "KillContainer" for "tiller" with KillContainerError: "rpc error: code = Unknown desc = Error: No such container: 38aa3b961ad7131eb812e2a79ccbb0f0ff1a1c8507854b8a7cdfd27b99f9a892"
	* Mar 10 00:44:43 addons-20210310004204-1084876 kubelet[2310]: I0310 00:44:43.341092    2310 reconciler.go:196] operationExecutor.UnmountVolume started for volume "tiller-token-gpgsr" (UniqueName: "kubernetes.io/secret/1c948044-7876-4077-a225-b7964c8e9b4e-tiller-token-gpgsr") pod "1c948044-7876-4077-a225-b7964c8e9b4e" (UID: "1c948044-7876-4077-a225-b7964c8e9b4e")
	* Mar 10 00:44:43 addons-20210310004204-1084876 kubelet[2310]: I0310 00:44:43.361171    2310 operation_generator.go:797] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c948044-7876-4077-a225-b7964c8e9b4e-tiller-token-gpgsr" (OuterVolumeSpecName: "tiller-token-gpgsr") pod "1c948044-7876-4077-a225-b7964c8e9b4e" (UID: "1c948044-7876-4077-a225-b7964c8e9b4e"). InnerVolumeSpecName "tiller-token-gpgsr". PluginName "kubernetes.io/secret", VolumeGidValue ""
	* Mar 10 00:44:43 addons-20210310004204-1084876 kubelet[2310]: I0310 00:44:43.441709    2310 reconciler.go:319] Volume detached for volume "tiller-token-gpgsr" (UniqueName: "kubernetes.io/secret/1c948044-7876-4077-a225-b7964c8e9b4e-tiller-token-gpgsr") on node "addons-20210310004204-1084876" DevicePath ""
	* 
	* ==> storage-provisioner [e4d14e816543] <==
	* I0310 00:43:01.342140       1 storage_provisioner.go:115] Initializing the minikube storage provisioner...
	* I0310 00:43:01.949172       1 storage_provisioner.go:140] Storage provisioner initialized, now starting service!
	* I0310 00:43:01.949234       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/k8s.io-minikube-hostpath...
	* I0310 00:43:02.137707       1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	* I0310 00:43:02.138633       1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_addons-20210310004204-1084876_771504a9-93bb-4ded-8aa4-7c86f4ec4a2f!
	* I0310 00:43:02.138685       1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bdb5b910-f2c4-47b4-96c4-a42a195bb905", APIVersion:"v1", ResourceVersion:"712", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-20210310004204-1084876_771504a9-93bb-4ded-8aa4-7c86f4ec4a2f became leader
	* I0310 00:43:02.238874       1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_addons-20210310004204-1084876_771504a9-93bb-4ded-8aa4-7c86f4ec4a2f!
	* 
	* ==> Audit <==
	* |---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                  Args                  |                Profile                 |  User   | Version |          Start Time           |           End Time            |
	|---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | --all                                  | pause-20210309205558-6591              | jenkins | v1.18.1 | Wed, 10 Mar 2021 00:41:50 UTC | Wed, 10 Mar 2021 00:41:52 UTC |
	| delete  | -p                                     | download-only-20210310004129-1084876   | jenkins | v1.18.1 | Wed, 10 Mar 2021 00:41:52 UTC | Wed, 10 Mar 2021 00:41:53 UTC |
	|         | download-only-20210310004129-1084876   |                                        |         |         |                               |                               |
	| delete  | -p                                     | download-only-20210310004129-1084876   | jenkins | v1.18.1 | Wed, 10 Mar 2021 00:41:53 UTC | Wed, 10 Mar 2021 00:41:53 UTC |
	|         | download-only-20210310004129-1084876   |                                        |         |         |                               |                               |
	| delete  | -p                                     | download-docker-20210310004153-1084876 | jenkins | v1.18.1 | Wed, 10 Mar 2021 00:42:02 UTC | Wed, 10 Mar 2021 00:42:04 UTC |
	|         | download-docker-20210310004153-1084876 |                                        |         |         |                               |                               |
	| start   | -p                                     | addons-20210310004204-1084876          | jenkins | v1.18.1 | Wed, 10 Mar 2021 00:42:05 UTC | Wed, 10 Mar 2021 00:44:30 UTC |
	|         | addons-20210310004204-1084876          |                                        |         |         |                               |                               |
	|         | --wait=true --memory=4000              |                                        |         |         |                               |                               |
	|         | --alsologtostderr                      |                                        |         |         |                               |                               |
	|         | --addons=registry                      |                                        |         |         |                               |                               |
	|         | --addons=metrics-server                |                                        |         |         |                               |                               |
	|         | --addons=olm                           |                                        |         |         |                               |                               |
	|         | --addons=volumesnapshots               |                                        |         |         |                               |                               |
	|         | --addons=csi-hostpath-driver           |                                        |         |         |                               |                               |
	|         | --driver=docker                        |                                        |         |         |                               |                               |
	|         | --container-runtime=docker             |                                        |         |         |                               |                               |
	|         | --addons=ingress                       |                                        |         |         |                               |                               |
	|         | --addons=helm-tiller                   |                                        |         |         |                               |                               |
	| -p      | addons-20210310004204-1084876          | addons-20210310004204-1084876          | jenkins | v1.18.1 | Wed, 10 Mar 2021 00:44:40 UTC | Wed, 10 Mar 2021 00:44:40 UTC |
	|         | addons disable helm-tiller             |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=1                 |                                        |         |         |                               |                               |
	|---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/03/10 00:42:05
	* Running on machine: debian-jenkins-agent-14
	* Binary: Built with gc go1.16 for linux/amd64
	* Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	* I0310 00:42:05.013791 1085855 out.go:239] Setting OutFile to fd 1 ...
	* I0310 00:42:05.014265 1085855 out.go:286] TERM=,COLORTERM=, which probably does not support color
	* I0310 00:42:05.014284 1085855 out.go:252] Setting ErrFile to fd 2...
	* I0310 00:42:05.014291 1085855 out.go:286] TERM=,COLORTERM=, which probably does not support color
	* I0310 00:42:05.014587 1085855 root.go:308] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/bin
	* I0310 00:42:05.015340 1085855 out.go:246] Setting JSON to false
	* I0310 00:42:05.058713 1085855 start.go:108] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":15886,"bootTime":1615321039,"procs":170,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-15-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	* I0310 00:42:05.058873 1085855 start.go:118] virtualization: kvm guest
	* I0310 00:42:05.062318 1085855 out.go:129] * [addons-20210310004204-1084876] minikube v1.18.1 on Debian 9.13 (kvm/amd64)
	* I0310 00:42:05.064594 1085855 out.go:129]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/kubeconfig
	* I0310 00:42:05.062600 1085855 notify.go:126] Checking for updates...
	* I0310 00:42:05.066565 1085855 out.go:129]   - MINIKUBE_BIN=out/minikube-linux-amd64
	* I0310 00:42:05.068441 1085855 out.go:129]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube
	* I0310 00:42:05.070504 1085855 out.go:129]   - MINIKUBE_LOCATION=10730
	* I0310 00:42:05.070753 1085855 driver.go:317] Setting default libvirt URI to qemu:///system
	* I0310 00:42:05.126444 1085855 docker.go:119] docker version: linux-19.03.15
	* I0310 00:42:05.126536 1085855 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	* I0310 00:42:05.221974 1085855 info.go:253] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:98 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:48 SystemTime:2021-03-10 00:42:05.166553283 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-15-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:31628283904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	* I0310 00:42:05.222222 1085855 docker.go:216] overlay module found
	* I0310 00:42:05.225310 1085855 out.go:129] * Using the docker driver based on user configuration
	* I0310 00:42:05.225346 1085855 start.go:276] selected driver: docker
	* I0310 00:42:05.225353 1085855 start.go:718] validating driver "docker" against <nil>
	* I0310 00:42:05.225375 1085855 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	* W0310 00:42:05.225418 1085855 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	* W0310 00:42:05.225487 1085855 out.go:191] ! Your cgroup does not allow setting memory.
	* I0310 00:42:05.227684 1085855 out.go:129]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* I0310 00:42:05.228389 1085855 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	* I0310 00:42:05.322113 1085855 info.go:253] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:98 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:48 SystemTime:2021-03-10 00:42:05.267920049 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-15-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:31628283904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	* I0310 00:42:05.322237 1085855 start_flags.go:253] no existing cluster config was found, will generate one from the flags 
	* I0310 00:42:05.322406 1085855 start_flags.go:717] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	* I0310 00:42:05.322432 1085855 cni.go:74] Creating CNI manager for ""
	* I0310 00:42:05.322441 1085855 cni.go:140] CNI unnecessary in this configuration, recommending no CNI
	* I0310 00:42:05.322449 1085855 start_flags.go:398] config:
	* {Name:addons-20210310004204-1084876 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:addons-20210310004204-1084876 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	* I0310 00:42:05.325366 1085855 out.go:129] * Starting control plane node addons-20210310004204-1084876 in cluster addons-20210310004204-1084876
	* I0310 00:42:05.399135 1085855 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e in local docker daemon, skipping pull
	* I0310 00:42:05.399168 1085855 cache.go:116] gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e exists in daemon, skipping pull
	* I0310 00:42:05.399180 1085855 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	* I0310 00:42:05.399230 1085855 preload.go:105] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	* I0310 00:42:05.399244 1085855 cache.go:54] Caching tarball of preloaded images
	* I0310 00:42:05.399268 1085855 preload.go:131] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	* I0310 00:42:05.399280 1085855 cache.go:57] Finished verifying existence of preloaded tar for  v1.20.2 on docker
	* I0310 00:42:05.399621 1085855 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/profiles/addons-20210310004204-1084876/config.json ...
	* I0310 00:42:05.399672 1085855 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/profiles/addons-20210310004204-1084876/config.json: {Name:mk0d09bcab3a8e18b85e31abc19f73ccbb4a1fb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 00:42:05.399990 1085855 cache.go:185] Successfully downloaded all kic artifacts
	* I0310 00:42:05.400036 1085855 start.go:313] acquiring machines lock for addons-20210310004204-1084876: {Name:mkee278759fac48bb9f8c6e069b1bd563c887a9d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 00:42:05.400142 1085855 start.go:317] acquired machines lock for "addons-20210310004204-1084876" in 77.466µs
	* I0310 00:42:05.400174 1085855 start.go:89] Provisioning new machine with config: &{Name:addons-20210310004204-1084876 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:addons-20210310004204-1084876 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}
	* I0310 00:42:05.400258 1085855 start.go:126] createHost starting for "" (driver="docker")
	* I0310 00:42:05.403497 1085855 out.go:150] * Creating docker container (CPUs=2, Memory=4000MB) ...
	* I0310 00:42:05.403752 1085855 start.go:160] libmachine.API.Create for "addons-20210310004204-1084876" (driver="docker")
	* I0310 00:42:05.403800 1085855 client.go:168] LocalClient.Create starting
	* I0310 00:42:05.403927 1085855 main.go:121] libmachine: Creating CA: /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/certs/ca.pem
	* I0310 00:42:05.897611 1085855 main.go:121] libmachine: Creating client certificate: /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/certs/cert.pem
	* I0310 00:42:06.327636 1085855 cli_runner.go:115] Run: docker network inspect addons-20210310004204-1084876 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	* W0310 00:42:06.370252 1085855 cli_runner.go:162] docker network inspect addons-20210310004204-1084876 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	* I0310 00:42:06.370342 1085855 network_create.go:240] running [docker network inspect addons-20210310004204-1084876] to gather additional debugging logs...
	* I0310 00:42:06.370366 1085855 cli_runner.go:115] Run: docker network inspect addons-20210310004204-1084876
	* W0310 00:42:06.414531 1085855 cli_runner.go:162] docker network inspect addons-20210310004204-1084876 returned with exit code 1
	* I0310 00:42:06.414572 1085855 network_create.go:243] error running [docker network inspect addons-20210310004204-1084876]: docker network inspect addons-20210310004204-1084876: exit status 1
	* stdout:
	* []
	* 
	* stderr:
	* Error: No such network: addons-20210310004204-1084876
	* I0310 00:42:06.414588 1085855 network_create.go:245] output of [docker network inspect addons-20210310004204-1084876]: -- stdout --
	* []
	* 
	* -- /stdout --
	* ** stderr ** 
	* Error: No such network: addons-20210310004204-1084876
	* 
	* ** /stderr **
	* I0310 00:42:06.414668 1085855 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	* I0310 00:42:06.458124 1085855 network.go:193] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	* I0310 00:42:06.458219 1085855 network_create.go:91] attempt to create network 192.168.49.0/24 with subnet: addons-20210310004204-1084876 and gateway 192.168.49.1 and MTU of 1500 ...
	* I0310 00:42:06.458290 1085855 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20210310004204-1084876
	* I0310 00:42:06.549033 1085855 kic.go:102] calculated static IP "192.168.49.205" for the "addons-20210310004204-1084876" container
	* I0310 00:42:06.549204 1085855 cli_runner.go:115] Run: docker ps -a --format 
	* I0310 00:42:06.592895 1085855 cli_runner.go:115] Run: docker volume create addons-20210310004204-1084876 --label name.minikube.sigs.k8s.io=addons-20210310004204-1084876 --label created_by.minikube.sigs.k8s.io=true
	* I0310 00:42:06.637479 1085855 oci.go:102] Successfully created a docker volume addons-20210310004204-1084876
	* I0310 00:42:06.637608 1085855 cli_runner.go:115] Run: docker run --rm --name addons-20210310004204-1084876-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210310004204-1084876 --entrypoint /usr/bin/test -v addons-20210310004204-1084876:/var gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -d /var/lib
	* I0310 00:42:07.445307 1085855 oci.go:106] Successfully prepared a docker volume addons-20210310004204-1084876
	* W0310 00:42:07.445371 1085855 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	* W0310 00:42:07.445381 1085855 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	* I0310 00:42:07.445400 1085855 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	* I0310 00:42:07.445461 1085855 preload.go:105] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	* I0310 00:42:07.445475 1085855 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	* I0310 00:42:07.445474 1085855 kic.go:175] Starting extracting preloaded images to volume ...
	* I0310 00:42:07.445545 1085855 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-20210310004204-1084876:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir
	* I0310 00:42:07.541396 1085855 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-20210310004204-1084876 --name addons-20210310004204-1084876 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210310004204-1084876 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-20210310004204-1084876 --network addons-20210310004204-1084876 --ip 192.168.49.205 --volume addons-20210310004204-1084876:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e
	* I0310 00:42:08.147018 1085855 cli_runner.go:115] Run: docker container inspect addons-20210310004204-1084876 --format=
	* I0310 00:42:08.198198 1085855 cli_runner.go:115] Run: docker container inspect addons-20210310004204-1084876 --format=
	* I0310 00:42:08.251434 1085855 cli_runner.go:115] Run: docker exec addons-20210310004204-1084876 stat /var/lib/dpkg/alternatives/iptables
	* I0310 00:42:08.395272 1085855 oci.go:278] the created container "addons-20210310004204-1084876" has a running status.
	* I0310 00:42:08.395318 1085855 kic.go:206] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/machines/addons-20210310004204-1084876/id_rsa...
	* I0310 00:42:08.470974 1085855 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/machines/addons-20210310004204-1084876/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	* I0310 00:42:08.915382 1085855 cli_runner.go:115] Run: docker container inspect addons-20210310004204-1084876 --format=
	* I0310 00:42:08.963906 1085855 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	* I0310 00:42:08.963937 1085855 kic_runner.go:115] Args: [docker exec --privileged addons-20210310004204-1084876 chown docker:docker /home/docker/.ssh/authorized_keys]
	* I0310 00:42:12.124629 1085855 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-20210310004204-1084876:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir: (4.679031292s)
	* I0310 00:42:12.124667 1085855 kic.go:184] duration metric: took 4.679191 seconds to extract preloaded images to volume
	* I0310 00:42:12.124749 1085855 cli_runner.go:115] Run: docker container inspect addons-20210310004204-1084876 --format=
	* I0310 00:42:12.168418 1085855 machine.go:88] provisioning docker machine ...
	* I0310 00:42:12.168492 1085855 ubuntu.go:169] provisioning hostname "addons-20210310004204-1084876"
	* I0310 00:42:12.168565 1085855 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210310004204-1084876
	* I0310 00:42:12.213183 1085855 main.go:121] libmachine: Using SSH client type: native
	* I0310 00:42:12.213443 1085855 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x7fc080] 0x7fc040 <nil>  [] 0s} 127.0.0.1 33482 <nil> <nil>}
	* I0310 00:42:12.213463 1085855 main.go:121] libmachine: About to run SSH command:
	* sudo hostname addons-20210310004204-1084876 && echo "addons-20210310004204-1084876" | sudo tee /etc/hostname
	* I0310 00:42:12.340040 1085855 main.go:121] libmachine: SSH cmd err, output: <nil>: addons-20210310004204-1084876
	* 
	* I0310 00:42:12.340113 1085855 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210310004204-1084876
	* I0310 00:42:12.384729 1085855 main.go:121] libmachine: Using SSH client type: native
	* I0310 00:42:12.384926 1085855 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x7fc080] 0x7fc040 <nil>  [] 0s} 127.0.0.1 33482 <nil> <nil>}
	* I0310 00:42:12.384964 1085855 main.go:121] libmachine: About to run SSH command:
	* 
	* 		if ! grep -xq '.*\saddons-20210310004204-1084876' /etc/hosts; then
	* 			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	* 				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-20210310004204-1084876/g' /etc/hosts;
	* 			else 
	* 				echo '127.0.1.1 addons-20210310004204-1084876' | sudo tee -a /etc/hosts; 
	* 			fi
	* 		fi
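The SSH command above follows a common rewrite-or-append idiom: if a `127.0.1.1` record already exists it is rewritten in place with `sed`, otherwise a fresh record is appended. A minimal sketch of the same logic against a scratch file (`HOSTS` and `NAME` are illustrative; the real command runs under `sudo` against the node's `/etc/hosts`):

```shell
# Rewrite-or-append a 127.0.1.1 hostname record, demonstrated on a scratch file.
HOSTS=$(mktemp)
NAME=addons-demo   # illustrative hostname, not taken from the log
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"

if ! grep -q "[[:space:]]$NAME\$" "$HOSTS"; then
  if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then
    # An entry already exists: rewrite it in place.
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
  else
    # No entry yet: append a fresh one.
    echo "127.0.1.1 $NAME" >> "$HOSTS"
  fi
fi
RESULT=$(grep '^127\.0\.1\.1' "$HOSTS")
echo "$RESULT"
rm -f "$HOSTS"
```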
	* I0310 00:42:12.505474 1085855 main.go:121] libmachine: SSH cmd err, output: <nil>: 
	* I0310 00:42:12.505520 1085855 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube}
	* I0310 00:42:12.505552 1085855 ubuntu.go:177] setting up certificates
	* I0310 00:42:12.505566 1085855 provision.go:83] configureAuth start
	* I0310 00:42:12.505637 1085855 cli_runner.go:115] Run: docker container inspect -f "" addons-20210310004204-1084876
	* I0310 00:42:12.554321 1085855 provision.go:137] copyHostCerts
	* I0310 00:42:12.554407 1085855 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/cert.pem (1123 bytes)
	* I0310 00:42:12.554515 1085855 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/key.pem (1679 bytes)
	* I0310 00:42:12.554607 1085855 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/ca.pem (1078 bytes)
	* I0310 00:42:12.554662 1085855 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/certs/ca-key.pem org=jenkins.addons-20210310004204-1084876 san=[192.168.49.205 127.0.0.1 localhost 127.0.0.1 minikube addons-20210310004204-1084876]
	* I0310 00:42:12.935464 1085855 provision.go:165] copyRemoteCerts
	* I0310 00:42:12.935546 1085855 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	* I0310 00:42:12.935588 1085855 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210310004204-1084876
	* I0310 00:42:12.981174 1085855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33482 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/machines/addons-20210310004204-1084876/id_rsa Username:docker}
	* I0310 00:42:13.085748 1085855 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	* I0310 00:42:13.106882 1085855 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/machines/server.pem --> /etc/docker/server.pem (1261 bytes)
	* I0310 00:42:13.127990 1085855 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	* I0310 00:42:13.149033 1085855 provision.go:86] duration metric: configureAuth took 643.440325ms
	* I0310 00:42:13.149092 1085855 ubuntu.go:193] setting minikube options for container-runtime
	* I0310 00:42:13.149364 1085855 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210310004204-1084876
	* I0310 00:42:13.194720 1085855 main.go:121] libmachine: Using SSH client type: native
	* I0310 00:42:13.194907 1085855 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x7fc080] 0x7fc040 <nil>  [] 0s} 127.0.0.1 33482 <nil> <nil>}
	* I0310 00:42:13.194922 1085855 main.go:121] libmachine: About to run SSH command:
	* df --output=fstype / | tail -n 1
	* I0310 00:42:13.310231 1085855 main.go:121] libmachine: SSH cmd err, output: <nil>: overlay
	* 
	* I0310 00:42:13.310272 1085855 ubuntu.go:71] root file system type: overlay
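The provisioner learned the root filesystem type ("overlay") by running `df --output=fstype / | tail -n 1` over SSH. The same probe works on any Linux host with GNU coreutils (the label printed here is mine, not minikube's):

```shell
# Probe the root filesystem type, as the provisioner does over SSH.
# (--output=fstype is a GNU coreutils extension to df.)
FSTYPE=$(df --output=fstype / | tail -n 1)
echo "root filesystem: $FSTYPE"
```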
	* I0310 00:42:13.310515 1085855 provision.go:296] Updating docker unit: /lib/systemd/system/docker.service ...
	* I0310 00:42:13.310600 1085855 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210310004204-1084876
	* I0310 00:42:13.355670 1085855 main.go:121] libmachine: Using SSH client type: native
	* I0310 00:42:13.355853 1085855 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x7fc080] 0x7fc040 <nil>  [] 0s} 127.0.0.1 33482 <nil> <nil>}
	* I0310 00:42:13.355940 1085855 main.go:121] libmachine: About to run SSH command:
	* sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	* Description=Docker Application Container Engine
	* Documentation=https://docs.docker.com
	* BindsTo=containerd.service
	* After=network-online.target firewalld.service containerd.service
	* Wants=network-online.target
	* Requires=docker.socket
	* StartLimitBurst=3
	* StartLimitIntervalSec=60
	* 
	* [Service]
	* Type=notify
	* Restart=on-failure
	* 
	* 
	* 
	* # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	* # The base configuration already specifies an 'ExecStart=...' command. The first directive
	* # here is to clear out that command inherited from the base configuration. Without this,
	* # the command from the base configuration and the command specified here are treated as
	* # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	* # will catch this invalid input and refuse to start the service with an error like:
	* #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	* 
	* # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	* # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	* ExecStart=
	* ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	* ExecReload=/bin/kill -s HUP \$MAINPID
	* 
	* # Having non-zero Limit*s causes performance problems due to accounting overhead
	* # in the kernel. We recommend using cgroups to do container-local accounting.
	* LimitNOFILE=infinity
	* LimitNPROC=infinity
	* LimitCORE=infinity
	* 
	* # Uncomment TasksMax if your systemd version supports it.
	* # Only systemd 226 and above support this version.
	* TasksMax=infinity
	* TimeoutStartSec=0
	* 
	* # set delegate yes so that systemd does not reset the cgroups of docker containers
	* Delegate=yes
	* 
	* # kill only the docker process, not all processes in the cgroup
	* KillMode=process
	* 
	* [Install]
	* WantedBy=multi-user.target
	* " | sudo tee /lib/systemd/system/docker.service.new
	* I0310 00:42:13.480098 1085855 main.go:121] libmachine: SSH cmd err, output: <nil>: [Unit]
	* Description=Docker Application Container Engine
	* Documentation=https://docs.docker.com
	* BindsTo=containerd.service
	* After=network-online.target firewalld.service containerd.service
	* Wants=network-online.target
	* Requires=docker.socket
	* StartLimitBurst=3
	* StartLimitIntervalSec=60
	* 
	* [Service]
	* Type=notify
	* Restart=on-failure
	* 
	* 
	* 
	* # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	* # The base configuration already specifies an 'ExecStart=...' command. The first directive
	* # here is to clear out that command inherited from the base configuration. Without this,
	* # the command from the base configuration and the command specified here are treated as
	* # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	* # will catch this invalid input and refuse to start the service with an error like:
	* #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	* 
	* # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	* # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	* ExecStart=
	* ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	* ExecReload=/bin/kill -s HUP $MAINPID
	* 
	* # Having non-zero Limit*s causes performance problems due to accounting overhead
	* # in the kernel. We recommend using cgroups to do container-local accounting.
	* LimitNOFILE=infinity
	* LimitNPROC=infinity
	* LimitCORE=infinity
	* 
	* # Uncomment TasksMax if your systemd version supports it.
	* # Only systemd 226 and above support this version.
	* TasksMax=infinity
	* TimeoutStartSec=0
	* 
	* # set delegate yes so that systemd does not reset the cgroups of docker containers
	* Delegate=yes
	* 
	* # kill only the docker process, not all processes in the cgroup
	* KillMode=process
	* 
	* [Install]
	* WantedBy=multi-user.target
	* 
	* I0310 00:42:13.480216 1085855 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210310004204-1084876
	* I0310 00:42:13.524829 1085855 main.go:121] libmachine: Using SSH client type: native
	* I0310 00:42:13.525007 1085855 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x7fc080] 0x7fc040 <nil>  [] 0s} 127.0.0.1 33482 <nil> <nil>}
	* I0310 00:42:13.525030 1085855 main.go:121] libmachine: About to run SSH command:
	* sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
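The command above is a write-new / diff / swap-only-if-changed idiom: `diff -u` exits non-zero when the freshly rendered `docker.service.new` differs from the installed unit, and only then is the file swapped in and the daemon restarted. A sketch of the same control flow with scratch files standing in for the systemd paths, and a variable standing in for the `daemon-reload`/`restart` step:

```shell
# Write-new / diff / swap-only-if-changed idiom, sketched with scratch files.
# The real command targets /lib/systemd/system/docker.service and follows the
# swap with `systemctl daemon-reload`, `enable`, and `restart`.
CUR=$(mktemp)
NEW=$(mktemp)
printf 'ExecStart=/usr/bin/dockerd -H fd://\n' > "$CUR"
printf 'ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376\n' > "$NEW"

# `diff -u` exits non-zero when the files differ, which takes the || branch.
diff -u "$CUR" "$NEW" > /dev/null || {
  mv "$NEW" "$CUR"
  RESTARTED=yes   # stands in for the daemon-reload + restart step
}
CONTENT=$(cat "$CUR")
echo "$CONTENT"
rm -f "$CUR" "$NEW"
```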
	* I0310 00:42:14.243676 1085855 main.go:121] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-01-29 14:31:32.000000000 +0000
	* +++ /lib/systemd/system/docker.service.new	2021-03-10 00:42:13.473915048 +0000
	* @@ -1,30 +1,32 @@
	*  [Unit]
	*  Description=Docker Application Container Engine
	*  Documentation=https://docs.docker.com
	* +BindsTo=containerd.service
	*  After=network-online.target firewalld.service containerd.service
	*  Wants=network-online.target
	* -Requires=docker.socket containerd.service
	* +Requires=docker.socket
	* +StartLimitBurst=3
	* +StartLimitIntervalSec=60
	*  
	*  [Service]
	*  Type=notify
	* -# the default is not to use systemd for cgroups because the delegate issues still
	* -# exists and systemd currently does not support the cgroup feature set required
	* -# for containers run by docker
	* -ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	* -ExecReload=/bin/kill -s HUP $MAINPID
	* -TimeoutSec=0
	* -RestartSec=2
	* -Restart=always
	* -
	* -# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	* -# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	* -# to make them work for either version of systemd.
	* -StartLimitBurst=3
	* +Restart=on-failure
	*  
	* -# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	* -# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	* -# this option work for either version of systemd.
	* -StartLimitInterval=60s
	* +
	* +
	* +# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	* +# The base configuration already specifies an 'ExecStart=...' command. The first directive
	* +# here is to clear out that command inherited from the base configuration. Without this,
	* +# the command from the base configuration and the command specified here are treated as
	* +# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	* +# will catch this invalid input and refuse to start the service with an error like:
	* +#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	* +
	* +# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	* +# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	* +ExecStart=
	* +ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	* +ExecReload=/bin/kill -s HUP $MAINPID
	*  
	*  # Having non-zero Limit*s causes performance problems due to accounting overhead
	*  # in the kernel. We recommend using cgroups to do container-local accounting.
	* @@ -32,16 +34,16 @@
	*  LimitNPROC=infinity
	*  LimitCORE=infinity
	*  
	* -# Comment TasksMax if your systemd version does not support it.
	* -# Only systemd 226 and above support this option.
	* +# Uncomment TasksMax if your systemd version supports it.
	* +# Only systemd 226 and above support this version.
	*  TasksMax=infinity
	* +TimeoutStartSec=0
	*  
	*  # set delegate yes so that systemd does not reset the cgroups of docker containers
	*  Delegate=yes
	*  
	*  # kill only the docker process, not all processes in the cgroup
	*  KillMode=process
	* -OOMScoreAdjust=-500
	*  
	*  [Install]
	*  WantedBy=multi-user.target
	* Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	* Executing: /lib/systemd/systemd-sysv-install enable docker
	* 
	* I0310 00:42:14.243728 1085855 machine.go:91] provisioned docker machine in 2.075285314s
	* I0310 00:42:14.243739 1085855 client.go:171] LocalClient.Create took 8.839928666s
	* I0310 00:42:14.243757 1085855 start.go:168] duration metric: libmachine.API.Create for "addons-20210310004204-1084876" took 8.840005452s
	* I0310 00:42:14.243770 1085855 start.go:267] post-start starting for "addons-20210310004204-1084876" (driver="docker")
	* I0310 00:42:14.243778 1085855 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	* I0310 00:42:14.243844 1085855 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	* I0310 00:42:14.243884 1085855 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210310004204-1084876
	* I0310 00:42:14.286998 1085855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33482 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/machines/addons-20210310004204-1084876/id_rsa Username:docker}
	* I0310 00:42:14.373976 1085855 ssh_runner.go:149] Run: cat /etc/os-release
	* I0310 00:42:14.377738 1085855 main.go:121] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	* I0310 00:42:14.377764 1085855 main.go:121] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	* I0310 00:42:14.377776 1085855 main.go:121] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	* I0310 00:42:14.377784 1085855 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	* I0310 00:42:14.377796 1085855 filesync.go:118] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/addons for local assets ...
	* I0310 00:42:14.377861 1085855 filesync.go:118] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/files for local assets ...
	* I0310 00:42:14.377891 1085855 start.go:270] post-start completed in 134.112809ms
	* I0310 00:42:14.378259 1085855 cli_runner.go:115] Run: docker container inspect -f "" addons-20210310004204-1084876
	* I0310 00:42:14.421817 1085855 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/profiles/addons-20210310004204-1084876/config.json ...
	* I0310 00:42:14.422086 1085855 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	* I0310 00:42:14.422143 1085855 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210310004204-1084876
	* I0310 00:42:14.467382 1085855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33482 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/machines/addons-20210310004204-1084876/id_rsa Username:docker}
	* I0310 00:42:14.552269 1085855 start.go:129] duration metric: createHost completed in 9.15199428s
	* I0310 00:42:14.552304 1085855 start.go:80] releasing machines lock for "addons-20210310004204-1084876", held for 9.152144139s
	* I0310 00:42:14.552421 1085855 cli_runner.go:115] Run: docker container inspect -f "" addons-20210310004204-1084876
	* I0310 00:42:14.595765 1085855 ssh_runner.go:149] Run: systemctl --version
	* I0310 00:42:14.595801 1085855 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	* I0310 00:42:14.595826 1085855 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210310004204-1084876
	* I0310 00:42:14.595877 1085855 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210310004204-1084876
	* I0310 00:42:14.650328 1085855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33482 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/machines/addons-20210310004204-1084876/id_rsa Username:docker}
	* I0310 00:42:14.650454 1085855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33482 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/machines/addons-20210310004204-1084876/id_rsa Username:docker}
	* I0310 00:42:14.759843 1085855 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	* I0310 00:42:14.771315 1085855 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	* I0310 00:42:14.782566 1085855 cruntime.go:206] skipping containerd shutdown because we are bound to it
	* I0310 00:42:14.782652 1085855 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	* I0310 00:42:14.793625 1085855 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	* image-endpoint: unix:///var/run/dockershim.sock
	* " | sudo tee /etc/crictl.yaml"
	* I0310 00:42:14.808531 1085855 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	* I0310 00:42:14.819064 1085855 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	* I0310 00:42:14.893142 1085855 ssh_runner.go:149] Run: sudo systemctl start docker
	* I0310 00:42:14.904897 1085855 ssh_runner.go:149] Run: docker version --format 
	* I0310 00:42:14.962914 1085855 out.go:150] * Preparing Kubernetes v1.20.2 on Docker 20.10.3 ...
	* I0310 00:42:14.963015 1085855 cli_runner.go:115] Run: docker network inspect addons-20210310004204-1084876 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	* I0310 00:42:15.005871 1085855 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	* I0310 00:42:15.010210 1085855 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
	* I0310 00:42:15.021819 1085855 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	* I0310 00:42:15.021868 1085855 preload.go:105] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	* I0310 00:42:15.021911 1085855 ssh_runner.go:149] Run: docker images --format :
	* I0310 00:42:15.070901 1085855 docker.go:423] Got preloaded images: -- stdout --
	* k8s.gcr.io/kube-proxy:v1.20.2
	* k8s.gcr.io/kube-controller-manager:v1.20.2
	* k8s.gcr.io/kube-apiserver:v1.20.2
	* k8s.gcr.io/kube-scheduler:v1.20.2
	* kubernetesui/dashboard:v2.1.0
	* gcr.io/k8s-minikube/storage-provisioner:v4
	* k8s.gcr.io/etcd:3.4.13-0
	* k8s.gcr.io/coredns:1.7.0
	* kubernetesui/metrics-scraper:v1.0.4
	* k8s.gcr.io/pause:3.2
	* 
	* -- /stdout --
	* I0310 00:42:15.070933 1085855 docker.go:360] Images already preloaded, skipping extraction
	* I0310 00:42:15.070984 1085855 ssh_runner.go:149] Run: docker images --format :
	* I0310 00:42:15.118086 1085855 docker.go:423] Got preloaded images: -- stdout --
	* k8s.gcr.io/kube-proxy:v1.20.2
	* k8s.gcr.io/kube-controller-manager:v1.20.2
	* k8s.gcr.io/kube-apiserver:v1.20.2
	* k8s.gcr.io/kube-scheduler:v1.20.2
	* kubernetesui/dashboard:v2.1.0
	* gcr.io/k8s-minikube/storage-provisioner:v4
	* k8s.gcr.io/etcd:3.4.13-0
	* k8s.gcr.io/coredns:1.7.0
	* kubernetesui/metrics-scraper:v1.0.4
	* k8s.gcr.io/pause:3.2
	* 
	* -- /stdout --
	* I0310 00:42:15.118117 1085855 cache_images.go:73] Images are preloaded, skipping loading
	* I0310 00:42:15.118172 1085855 ssh_runner.go:149] Run: docker info --format 
	* I0310 00:42:15.217860 1085855 cni.go:74] Creating CNI manager for ""
	* I0310 00:42:15.217885 1085855 cni.go:140] CNI unnecessary in this configuration, recommending no CNI
	* I0310 00:42:15.217896 1085855 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	* I0310 00:42:15.217911 1085855 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.205 APIServerPort:8443 KubernetesVersion:v1.20.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-20210310004204-1084876 NodeName:addons-20210310004204-1084876 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.205"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.205 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	* I0310 00:42:15.218052 1085855 kubeadm.go:154] kubeadm config:
	* apiVersion: kubeadm.k8s.io/v1beta2
	* kind: InitConfiguration
	* localAPIEndpoint:
	*   advertiseAddress: 192.168.49.205
	*   bindPort: 8443
	* bootstrapTokens:
	*   - groups:
	*       - system:bootstrappers:kubeadm:default-node-token
	*     ttl: 24h0m0s
	*     usages:
	*       - signing
	*       - authentication
	* nodeRegistration:
	*   criSocket: /var/run/dockershim.sock
	*   name: "addons-20210310004204-1084876"
	*   kubeletExtraArgs:
	*     node-ip: 192.168.49.205
	*   taints: []
	* ---
	* apiVersion: kubeadm.k8s.io/v1beta2
	* kind: ClusterConfiguration
	* apiServer:
	*   certSANs: ["127.0.0.1", "localhost", "192.168.49.205"]
	*   extraArgs:
	*     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	* controllerManager:
	*   extraArgs:
	*     allocate-node-cidrs: "true"
	*     leader-elect: "false"
	* scheduler:
	*   extraArgs:
	*     leader-elect: "false"
	* certificatesDir: /var/lib/minikube/certs
	* clusterName: mk
	* controlPlaneEndpoint: control-plane.minikube.internal:8443
	* dns:
	*   type: CoreDNS
	* etcd:
	*   local:
	*     dataDir: /var/lib/minikube/etcd
	*     extraArgs:
	*       proxy-refresh-interval: "70000"
	* kubernetesVersion: v1.20.2
	* networking:
	*   dnsDomain: cluster.local
	*   podSubnet: "10.244.0.0/16"
	*   serviceSubnet: 10.96.0.0/12
	* ---
	* apiVersion: kubelet.config.k8s.io/v1beta1
	* kind: KubeletConfiguration
	* authentication:
	*   x509:
	*     clientCAFile: /var/lib/minikube/certs/ca.crt
	* cgroupDriver: cgroupfs
	* clusterDomain: "cluster.local"
	* # disable disk resource management by default
	* imageGCHighThresholdPercent: 100
	* evictionHard:
	*   nodefs.available: "0%"
	*   nodefs.inodesFree: "0%"
	*   imagefs.available: "0%"
	* failSwapOn: false
	* staticPodPath: /etc/kubernetes/manifests
	* ---
	* apiVersion: kubeproxy.config.k8s.io/v1alpha1
	* kind: KubeProxyConfiguration
	* clusterCIDR: "10.244.0.0/16"
	* metricsBindAddress: 0.0.0.0:10249
	* 
	* I0310 00:42:15.218148 1085855 kubeadm.go:919] kubelet [Unit]
	* Wants=docker.socket
	* 
	* [Service]
	* ExecStart=
	* ExecStart=/var/lib/minikube/binaries/v1.20.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=addons-20210310004204-1084876 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.205
	* 
	* [Install]
	*  config:
	* {KubernetesVersion:v1.20.2 ClusterName:addons-20210310004204-1084876 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	* I0310 00:42:15.218205 1085855 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.2
	* I0310 00:42:15.227166 1085855 binaries.go:44] Found k8s binaries, skipping transfer
	* I0310 00:42:15.227253 1085855 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	* I0310 00:42:15.235992 1085855 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (357 bytes)
	* I0310 00:42:15.252079 1085855 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	* I0310 00:42:15.267534 1085855 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1867 bytes)
	* I0310 00:42:15.282743 1085855 ssh_runner.go:149] Run: grep 192.168.49.205	control-plane.minikube.internal$ /etc/hosts
	* I0310 00:42:15.286443 1085855 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "192.168.49.205	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
	* I0310 00:42:15.297404 1085855 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/profiles/addons-20210310004204-1084876 for IP: 192.168.49.205
	* I0310 00:42:15.297468 1085855 certs.go:175] generating minikubeCA CA: /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/ca.key
	* I0310 00:42:15.494638 1085855 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/ca.crt ...
	* I0310 00:42:15.494678 1085855 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/ca.crt: {Name:mka1a851994bce625e7814c9cceb7a4e44773882 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 00:42:15.494904 1085855 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/ca.key ...
	* I0310 00:42:15.494920 1085855 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/ca.key: {Name:mk9752d5c0f6c5cc545c66adf88a40163200bb6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 00:42:15.495052 1085855 certs.go:175] generating proxyClientCA CA: /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/proxy-client-ca.key
	* I0310 00:42:15.670619 1085855 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/proxy-client-ca.crt ...
	* I0310 00:42:15.670662 1085855 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/proxy-client-ca.crt: {Name:mk09f967a1b4d21777a2a1eb1da0299bb1cc27ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 00:42:15.670923 1085855 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/proxy-client-ca.key ...
	* I0310 00:42:15.670946 1085855 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/proxy-client-ca.key: {Name:mkacfb83772ed24e4fb74397cdcd6fa62d4b658d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 00:42:15.671133 1085855 certs.go:279] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/profiles/addons-20210310004204-1084876/client.key
	* I0310 00:42:15.671152 1085855 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/profiles/addons-20210310004204-1084876/client.crt with IP's: []
	* I0310 00:42:16.501554 1085855 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/profiles/addons-20210310004204-1084876/client.crt ...
	* I0310 00:42:16.501602 1085855 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/profiles/addons-20210310004204-1084876/client.crt: {Name:mkb9f2993ff815dd0ef3a67fa770c723e9bb3c1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 00:42:16.501848 1085855 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/profiles/addons-20210310004204-1084876/client.key ...
	* I0310 00:42:16.501868 1085855 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/profiles/addons-20210310004204-1084876/client.key: {Name:mk3cbb240536d99c1b0b51188e54e69a5ccba8d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 00:42:16.501972 1085855 certs.go:279] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/profiles/addons-20210310004204-1084876/apiserver.key.756a3cc5
	* I0310 00:42:16.501984 1085855 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/profiles/addons-20210310004204-1084876/apiserver.crt.756a3cc5 with IP's: [192.168.49.205 10.96.0.1 127.0.0.1 10.0.0.1]
	* I0310 00:42:17.129782 1085855 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/profiles/addons-20210310004204-1084876/apiserver.crt.756a3cc5 ...
	* I0310 00:42:17.129831 1085855 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/profiles/addons-20210310004204-1084876/apiserver.crt.756a3cc5: {Name:mkeafb2cc449bbc30876a42e2b0a0e41a6f91ba1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 00:42:17.130061 1085855 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/profiles/addons-20210310004204-1084876/apiserver.key.756a3cc5 ...
	* I0310 00:42:17.130080 1085855 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/profiles/addons-20210310004204-1084876/apiserver.key.756a3cc5: {Name:mk212325427d4440bbf9ce86b88ea5d267c0a902 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 00:42:17.130190 1085855 certs.go:290] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/profiles/addons-20210310004204-1084876/apiserver.crt.756a3cc5 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/profiles/addons-20210310004204-1084876/apiserver.crt
	* I0310 00:42:17.130273 1085855 certs.go:294] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/profiles/addons-20210310004204-1084876/apiserver.key.756a3cc5 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/profiles/addons-20210310004204-1084876/apiserver.key
	* I0310 00:42:17.130334 1085855 certs.go:279] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/profiles/addons-20210310004204-1084876/proxy-client.key
	* I0310 00:42:17.130345 1085855 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/profiles/addons-20210310004204-1084876/proxy-client.crt with IP's: []
	* I0310 00:42:17.492141 1085855 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/profiles/addons-20210310004204-1084876/proxy-client.crt ...
	* I0310 00:42:17.492183 1085855 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/profiles/addons-20210310004204-1084876/proxy-client.crt: {Name:mk5ba1eea4a3a68270a31866b1f703374a896461 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 00:42:17.492446 1085855 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/profiles/addons-20210310004204-1084876/proxy-client.key ...
	* I0310 00:42:17.496061 1085855 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/profiles/addons-20210310004204-1084876/proxy-client.key: {Name:mkc42a2aa245eb85bd9c2fcd35d62e20f429dd12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 00:42:17.496343 1085855 certs.go:354] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/certs/ca-key.pem (1679 bytes)
	* I0310 00:42:17.496397 1085855 certs.go:354] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/certs/ca.pem (1078 bytes)
	* I0310 00:42:17.496430 1085855 certs.go:354] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/certs/cert.pem (1123 bytes)
	* I0310 00:42:17.496476 1085855 certs.go:354] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/certs/key.pem (1679 bytes)
	* I0310 00:42:17.497605 1085855 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/profiles/addons-20210310004204-1084876/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	* I0310 00:42:17.533725 1085855 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/profiles/addons-20210310004204-1084876/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	* I0310 00:42:17.555789 1085855 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/profiles/addons-20210310004204-1084876/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	* I0310 00:42:17.577221 1085855 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/profiles/addons-20210310004204-1084876/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	* I0310 00:42:17.599024 1085855 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	* I0310 00:42:17.620277 1085855 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	* I0310 00:42:17.642522 1085855 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	* I0310 00:42:17.664281 1085855 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	* I0310 00:42:17.686466 1085855 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	* I0310 00:42:17.708330 1085855 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	* I0310 00:42:17.724179 1085855 ssh_runner.go:149] Run: openssl version
	* I0310 00:42:17.730607 1085855 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	* I0310 00:42:17.739976 1085855 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	* I0310 00:42:17.744072 1085855 certs.go:395] hashing: -rw-r--r-- 1 root root 1111 Mar 10 00:42 /usr/share/ca-certificates/minikubeCA.pem
	* I0310 00:42:17.744131 1085855 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	* I0310 00:42:17.750145 1085855 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	* I0310 00:42:17.759465 1085855 kubeadm.go:385] StartCluster: {Name:addons-20210310004204-1084876 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:addons-20210310004204-1084876 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.205 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	* I0310 00:42:17.759613 1085855 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format=
	* I0310 00:42:17.804764 1085855 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	* I0310 00:42:17.814146 1085855 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	* I0310 00:42:17.822838 1085855 kubeadm.go:219] ignoring SystemVerification for kubeadm because of docker driver
	* I0310 00:42:17.822928 1085855 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	* I0310 00:42:17.831650 1085855 kubeadm.go:150] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	* stdout:
	* 
	* stderr:
	* ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	* ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	* ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	* ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	* I0310 00:42:17.831702 1085855 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	* I0310 00:42:34.581508 1085855 out.go:150]   - Generating certificates and keys ...
	* I0310 00:42:34.585415 1085855 out.go:150]   - Booting up control plane ...
	* I0310 00:42:34.589349 1085855 out.go:150]   - Configuring RBAC rules ...
	* I0310 00:42:34.592331 1085855 cni.go:74] Creating CNI manager for ""
	* I0310 00:42:34.592356 1085855 cni.go:140] CNI unnecessary in this configuration, recommending no CNI
	* I0310 00:42:34.592391 1085855 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	* I0310 00:42:34.592494 1085855 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	* I0310 00:42:34.592532 1085855 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl label nodes minikube.k8s.io/version=v1.18.1 minikube.k8s.io/commit=8d9e062aa56d18f701a92d5344bd63e9d7a0bc2e minikube.k8s.io/name=addons-20210310004204-1084876 minikube.k8s.io/updated_at=2021_03_10T00_42_34_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	* I0310 00:42:34.784717 1085855 ops.go:34] apiserver oom_adj: -16
	* I0310 00:42:34.784869 1085855 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	* I0310 00:42:35.843694 1085855 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	* I0310 00:42:36.344034 1085855 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	* I0310 00:42:36.843683 1085855 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	* I0310 00:42:37.343382 1085855 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	* I0310 00:42:37.843883 1085855 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	* I0310 00:42:38.343709 1085855 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	* I0310 00:42:38.843390 1085855 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	* I0310 00:42:39.343934 1085855 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	* I0310 00:42:39.843535 1085855 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	* I0310 00:42:40.343213 1085855 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	* I0310 00:42:40.844074 1085855 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	* I0310 00:42:41.343696 1085855 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	* I0310 00:42:41.843120 1085855 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	* I0310 00:42:42.344034 1085855 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	* I0310 00:42:43.698533 1085855 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (1.354448474s)
	* I0310 00:42:43.843854 1085855 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	* I0310 00:42:46.472117 1085855 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (2.6282101s)
	* I0310 00:42:46.843537 1085855 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	* I0310 00:42:47.343297 1085855 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	* I0310 00:42:47.843692 1085855 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	* I0310 00:42:48.343496 1085855 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	* I0310 00:42:48.843368 1085855 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	* I0310 00:42:49.343910 1085855 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	* I0310 00:42:49.843107 1085855 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	* I0310 00:42:50.343542 1085855 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	* I0310 00:42:50.843106 1085855 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	* I0310 00:42:51.344067 1085855 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	* I0310 00:42:51.843941 1085855 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	* I0310 00:42:52.343912 1085855 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	* I0310 00:42:52.843230 1085855 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	* I0310 00:42:53.343902 1085855 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	* I0310 00:42:53.547966 1085855 kubeadm.go:995] duration metric: took 18.955564448s to wait for elevateKubeSystemPrivileges.
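The repeated `kubectl get sa default` lines above show minikube polling roughly every 500ms until the default service account exists (18.9s total in this run). The retry pattern can be sketched as a small shell loop; `poll_until` is an illustrative name, not minikube's actual implementation (which is Go code in kubeadm.go):

```shell
# Poll a command every 500ms until it succeeds or a timeout elapses.
# Illustrative stand-in for the service-account wait seen in the log;
# 'true' is used as the probed command so the sketch is self-contained.
poll_until() {
  local timeout_s=$1; shift
  local start elapsed
  start=$(date +%s)
  until "$@"; do
    elapsed=$(( $(date +%s) - start ))
    if [ "$elapsed" -ge "$timeout_s" ]; then
      echo "timed out after ${timeout_s}s" >&2
      return 1
    fi
    sleep 0.5
  done
}

poll_until 5 true && echo "service account check succeeded"
```

In the real flow the probed command would be the remote `kubectl get sa default --kubeconfig=...` invocation, with the 1m0s lock timeout visible elsewhere in the log as the outer bound.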
	* I0310 00:42:53.547999 1085855 kubeadm.go:387] StartCluster complete in 35.788544091s
	* I0310 00:42:53.548024 1085855 settings.go:142] acquiring lock: {Name:mk161ca7a313dd1f7452460ca17b58a620e51ee1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 00:42:53.548187 1085855 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/kubeconfig
	* I0310 00:42:53.548778 1085855 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/kubeconfig: {Name:mka4327859fb2faaf3a9844649da027e1c57d129 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 00:42:53.568119 1085855 kapi.go:233] deployment "coredns" in namespace "kube-system" and context "addons-20210310004204-1084876" rescaled to 1
	* I0310 00:42:53.568208 1085855 start.go:206] Will wait 6m0s for node up to 
	* I0310 00:42:53.571838 1085855 out.go:129] * Verifying Kubernetes components...
	* I0310 00:42:53.571908 1085855 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	* I0310 00:42:53.568348 1085855 addons.go:312] enableAddons start: toEnable=map[], additional=[registry metrics-server olm volumesnapshots csi-hostpath-driver ingress helm-tiller]
	* I0310 00:42:53.572079 1085855 addons.go:55] Setting volumesnapshots=true in profile "addons-20210310004204-1084876"
	* I0310 00:42:53.572104 1085855 addons.go:55] Setting olm=true in profile "addons-20210310004204-1084876"
	* I0310 00:42:53.572123 1085855 addons.go:55] Setting default-storageclass=true in profile "addons-20210310004204-1084876"
	* I0310 00:42:53.572093 1085855 addons.go:55] Setting helm-tiller=true in profile "addons-20210310004204-1084876"
	* I0310 00:42:53.572142 1085855 addons.go:131] Setting addon helm-tiller=true in "addons-20210310004204-1084876"
	* I0310 00:42:53.572144 1085855 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-20210310004204-1084876"
	* I0310 00:42:53.572165 1085855 host.go:66] Checking if "addons-20210310004204-1084876" exists ...
	* I0310 00:42:53.572179 1085855 addons.go:55] Setting registry=true in profile "addons-20210310004204-1084876"
	* I0310 00:42:53.572174 1085855 addons.go:55] Setting metrics-server=true in profile "addons-20210310004204-1084876"
	* I0310 00:42:53.572194 1085855 addons.go:131] Setting addon registry=true in "addons-20210310004204-1084876"
	* I0310 00:42:53.572208 1085855 addons.go:131] Setting addon metrics-server=true in "addons-20210310004204-1084876"
	* I0310 00:42:53.572218 1085855 host.go:66] Checking if "addons-20210310004204-1084876" exists ...
	* I0310 00:42:53.572232 1085855 host.go:66] Checking if "addons-20210310004204-1084876" exists ...
	* I0310 00:42:53.572499 1085855 addons.go:55] Setting storage-provisioner=true in profile "addons-20210310004204-1084876"
	* I0310 00:42:53.572538 1085855 addons.go:131] Setting addon storage-provisioner=true in "addons-20210310004204-1084876"
	* W0310 00:42:53.572548 1085855 addons.go:140] addon storage-provisioner should already be in state true
	* I0310 00:42:53.572565 1085855 host.go:66] Checking if "addons-20210310004204-1084876" exists ...
	* I0310 00:42:53.572697 1085855 cli_runner.go:115] Run: docker container inspect addons-20210310004204-1084876 --format=
	* I0310 00:42:53.572879 1085855 cli_runner.go:115] Run: docker container inspect addons-20210310004204-1084876 --format=
	* I0310 00:42:53.572913 1085855 cli_runner.go:115] Run: docker container inspect addons-20210310004204-1084876 --format=
	* I0310 00:42:53.572111 1085855 addons.go:55] Setting csi-hostpath-driver=true in profile "addons-20210310004204-1084876"
	* I0310 00:42:53.572124 1085855 addons.go:131] Setting addon volumesnapshots=true in "addons-20210310004204-1084876"
	* I0310 00:42:53.573049 1085855 host.go:66] Checking if "addons-20210310004204-1084876" exists ...
	* I0310 00:42:53.572916 1085855 cli_runner.go:115] Run: docker container inspect addons-20210310004204-1084876 --format=
	* I0310 00:42:53.573178 1085855 cli_runner.go:115] Run: docker container inspect addons-20210310004204-1084876 --format=
	* I0310 00:42:53.572164 1085855 addons.go:131] Setting addon olm=true in "addons-20210310004204-1084876"
	* I0310 00:42:53.573217 1085855 host.go:66] Checking if "addons-20210310004204-1084876" exists ...
	* I0310 00:42:53.573049 1085855 addons.go:131] Setting addon csi-hostpath-driver=true in "addons-20210310004204-1084876"
	* I0310 00:42:53.573363 1085855 host.go:66] Checking if "addons-20210310004204-1084876" exists ...
	* I0310 00:42:53.573603 1085855 cli_runner.go:115] Run: docker container inspect addons-20210310004204-1084876 --format=
	* I0310 00:42:53.573716 1085855 addons.go:55] Setting ingress=true in profile "addons-20210310004204-1084876"
	* I0310 00:42:53.573737 1085855 addons.go:131] Setting addon ingress=true in "addons-20210310004204-1084876"
	* I0310 00:42:53.573771 1085855 host.go:66] Checking if "addons-20210310004204-1084876" exists ...
	* I0310 00:42:53.573912 1085855 cli_runner.go:115] Run: docker container inspect addons-20210310004204-1084876 --format=
	* I0310 00:42:53.574022 1085855 cli_runner.go:115] Run: docker container inspect addons-20210310004204-1084876 --format=
	* I0310 00:42:53.574366 1085855 cli_runner.go:115] Run: docker container inspect addons-20210310004204-1084876 --format=
	* I0310 00:42:53.602246 1085855 pod_ready.go:36] extra waiting for kube-system core pods [kube-dns etcd kube-apiserver kube-controller-manager kube-proxy kube-scheduler] to be Ready ...
	* I0310 00:42:53.602281 1085855 pod_ready.go:59] waiting 6m0s for pod with "kube-dns" label in "kube-system" namespace to be Ready ...
	* I0310 00:42:53.668605 1085855 out.go:129]   - Using image registry:2.7.1
	* I0310 00:42:53.670821 1085855 out.go:129]   - Using image gcr.io/google_containers/kube-registry-proxy:0.4
	* I0310 00:42:53.670945 1085855 addons.go:249] installing /etc/kubernetes/addons/registry-rc.yaml
	* I0310 00:42:53.670976 1085855 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (788 bytes)
	* I0310 00:42:53.671054 1085855 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210310004204-1084876
	* I0310 00:42:53.687379 1085855 out.go:129]   - Using image gcr.io/kubernetes-helm/tiller:v2.16.12
	* I0310 00:42:53.687535 1085855 addons.go:249] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	* I0310 00:42:53.687565 1085855 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2433 bytes)
	* I0310 00:42:53.687634 1085855 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210310004204-1084876
	* I0310 00:42:53.692017 1085855 out.go:129]   - Using image gcr.io/k8s-minikube/storage-provisioner:v4
	* I0310 00:42:53.692151 1085855 addons.go:249] installing /etc/kubernetes/addons/storage-provisioner.yaml
	* I0310 00:42:53.692165 1085855 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	* I0310 00:42:53.692233 1085855 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210310004204-1084876
	* I0310 00:42:53.697913 1085855 out.go:129]   - Using image k8s.gcr.io/metrics-server-amd64:v0.2.1
	* I0310 00:42:53.698003 1085855 addons.go:249] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	* I0310 00:42:53.698016 1085855 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (401 bytes)
	* I0310 00:42:53.698082 1085855 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210310004204-1084876
	* I0310 00:42:53.698437 1085855 addons.go:131] Setting addon default-storageclass=true in "addons-20210310004204-1084876"
	* W0310 00:42:53.698464 1085855 addons.go:140] addon default-storageclass should already be in state true
	* I0310 00:42:53.698485 1085855 host.go:66] Checking if "addons-20210310004204-1084876" exists ...
	* I0310 00:42:53.699112 1085855 cli_runner.go:115] Run: docker container inspect addons-20210310004204-1084876 --format=
	* I0310 00:42:53.702413 1085855 out.go:129]   - Using image quay.io/k8scsi/csi-attacher:v3.0.0-rc1
	* I0310 00:42:53.704994 1085855 out.go:129]   - Using image quay.io/k8scsi/csi-node-driver-registrar:v1.3.0
	* I0310 00:42:53.710891 1085855 out.go:129]   - Using image quay.io/k8scsi/hostpathplugin:v1.4.0-rc2
	* I0310 00:42:53.715686 1085855 out.go:129]   - Using image quay.io/k8scsi/livenessprobe:v1.1.0
	* I0310 00:42:53.715778 1085855 out.go:129]   - Using image quay.io/operator-framework/olm:0.14.1
	* I0310 00:42:53.718315 1085855 out.go:129]   - Using image quay.io/k8scsi/csi-resizer:v0.6.0-rc1
	* I0310 00:42:53.720681 1085855 out.go:129]   - Using image quay.io/operator-framework/upstream-community-operators:07bbc13
	* I0310 00:42:53.715739 1085855 out.go:129]   - Using image k8s.gcr.io/sig-storage/snapshot-controller:v4.0.0
	* I0310 00:42:53.720839 1085855 addons.go:249] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	* I0310 00:42:53.720855 1085855 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	* I0310 00:42:53.720935 1085855 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210310004204-1084876
	* I0310 00:42:53.723549 1085855 out.go:129]   - Using image quay.io/k8scsi/csi-snapshotter:v2.1.0
	* I0310 00:42:53.725606 1085855 out.go:129]   - Using image jettech/kube-webhook-certgen:v1.2.2
	* I0310 00:42:53.727946 1085855 out.go:129]   - Using image gcr.io/k8s-staging-sig-storage/csi-provisioner:v2.0.0-rc2
	* I0310 00:42:53.728044 1085855 addons.go:249] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	* I0310 00:42:53.728065 1085855 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	* I0310 00:42:53.727887 1085855 out.go:129]   - Using image jettech/kube-webhook-certgen:v1.3.0
	* I0310 00:42:53.728155 1085855 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210310004204-1084876
	* I0310 00:42:53.730344 1085855 out.go:129]   - Using image us.gcr.io/k8s-artifacts-prod/ingress-nginx/controller:v0.40.2
	* I0310 00:42:53.730447 1085855 addons.go:249] installing /etc/kubernetes/addons/ingress-configmap.yaml
	* I0310 00:42:53.730463 1085855 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-configmap.yaml (1251 bytes)
	* I0310 00:42:53.730523 1085855 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210310004204-1084876
	* I0310 00:42:53.758670 1085855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33482 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/machines/addons-20210310004204-1084876/id_rsa Username:docker}
	* I0310 00:42:53.774826 1085855 addons.go:249] installing /etc/kubernetes/addons/crds.yaml
	* I0310 00:42:53.774866 1085855 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/crds.yaml (751764 bytes)
	* I0310 00:42:53.774957 1085855 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210310004204-1084876
	* I0310 00:42:53.788056 1085855 addons.go:249] installing /etc/kubernetes/addons/storageclass.yaml
	* I0310 00:42:53.788085 1085855 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	* I0310 00:42:53.788160 1085855 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210310004204-1084876
	* I0310 00:42:53.790918 1085855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33482 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/machines/addons-20210310004204-1084876/id_rsa Username:docker}
	* I0310 00:42:53.792391 1085855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33482 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/machines/addons-20210310004204-1084876/id_rsa Username:docker}
	* I0310 00:42:53.799587 1085855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33482 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/machines/addons-20210310004204-1084876/id_rsa Username:docker}
	* I0310 00:42:53.821995 1085855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33482 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/machines/addons-20210310004204-1084876/id_rsa Username:docker}
	* I0310 00:42:53.837063 1085855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33482 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/machines/addons-20210310004204-1084876/id_rsa Username:docker}
	* I0310 00:42:53.839591 1085855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33482 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/machines/addons-20210310004204-1084876/id_rsa Username:docker}
	* I0310 00:42:53.871891 1085855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33482 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/machines/addons-20210310004204-1084876/id_rsa Username:docker}
	* I0310 00:42:53.876823 1085855 addons.go:249] installing /etc/kubernetes/addons/registry-svc.yaml
	* I0310 00:42:53.876876 1085855 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	* I0310 00:42:53.877319 1085855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33482 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/machines/addons-20210310004204-1084876/id_rsa Username:docker}
	* I0310 00:42:53.944209 1085855 addons.go:249] installing /etc/kubernetes/addons/registry-proxy.yaml
	* I0310 00:42:53.944243 1085855 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (950 bytes)
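Each `scp memory --> <path> (N bytes)` line above records minikube streaming an in-memory addon manifest over SSH to a file on the node, then the byte count written. A local sketch of that write-then-report step, using a temp file in place of a real SSH session (the manifest content below is a placeholder, not an actual addon manifest):

```shell
# Write an in-memory manifest to a destination file and report its size,
# mirroring the "scp memory --> path (N bytes)" log lines. A mktemp file
# stands in for the remote /etc/kubernetes/addons/ path.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
apiVersion: v1
kind: ConfigMap
EOF
bytes=$(wc -c < "$tmp" | tr -d ' ')
echo "scp memory --> $tmp ($bytes bytes)"
rm -f "$tmp"
```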
	* I0310 00:42:53.955702 1085855 addons.go:249] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	* I0310 00:42:53.955732 1085855 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (3666 bytes)
	* I0310 00:42:53.955748 1085855 addons.go:249] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	* I0310 00:42:53.955767 1085855 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	* I0310 00:42:53.957804 1085855 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	* I0310 00:42:53.964349 1085855 addons.go:249] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	* I0310 00:42:53.964406 1085855 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (705 bytes)
	* I0310 00:42:54.035875 1085855 addons.go:249] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	* I0310 00:42:54.035911 1085855 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	* I0310 00:42:54.042519 1085855 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	* I0310 00:42:54.051978 1085855 addons.go:249] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	* I0310 00:42:54.052041 1085855 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	* I0310 00:42:54.057463 1085855 addons.go:249] installing /etc/kubernetes/addons/ingress-rbac.yaml
	* I0310 00:42:54.057495 1085855 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-rbac.yaml (4828 bytes)
	* I0310 00:42:54.057632 1085855 addons.go:249] installing /etc/kubernetes/addons/metrics-server-service.yaml
	* I0310 00:42:54.057650 1085855 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (401 bytes)
	* I0310 00:42:54.134772 1085855 addons.go:249] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	* I0310 00:42:54.134809 1085855 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2944 bytes)
	* I0310 00:42:54.147556 1085855 addons.go:249] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	* I0310 00:42:54.147588 1085855 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19584 bytes)
	* I0310 00:42:54.147871 1085855 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	* I0310 00:42:54.156570 1085855 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	* I0310 00:42:54.241569 1085855 addons.go:249] installing /etc/kubernetes/addons/ingress-dp.yaml
	* I0310 00:42:54.241607 1085855 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-dp.yaml (8749 bytes)
	* I0310 00:42:54.256714 1085855 addons.go:249] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	* I0310 00:42:54.256760 1085855 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3194 bytes)
	* I0310 00:42:54.256721 1085855 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	* I0310 00:42:54.342408 1085855 addons.go:249] installing /etc/kubernetes/addons/olm.yaml
	* I0310 00:42:54.342443 1085855 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/olm.yaml (9508 bytes)
	* I0310 00:42:54.356567 1085855 addons.go:249] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	* I0310 00:42:54.356604 1085855 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3428 bytes)
	* I0310 00:42:54.452547 1085855 addons.go:249] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	* I0310 00:42:54.452579 1085855 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2319 bytes)
	* I0310 00:42:54.543713 1085855 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/ingress-configmap.yaml -f /etc/kubernetes/addons/ingress-rbac.yaml -f /etc/kubernetes/addons/ingress-dp.yaml
	* I0310 00:42:54.549441 1085855 addons.go:249] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	* I0310 00:42:54.549524 1085855 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (999 bytes)
	* I0310 00:42:54.551105 1085855 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml
	* I0310 00:42:54.649127 1085855 addons.go:249] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	* I0310 00:42:54.649163 1085855 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (977 bytes)
	* I0310 00:42:54.743959 1085855 pod_ready.go:102] pod "coredns-74ff55c5b-xlj4r" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:42:53 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	* I0310 00:42:54.838554 1085855 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	* I0310 00:42:54.847798 1085855 addons.go:249] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	* I0310 00:42:54.847837 1085855 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (5283 bytes)
	* I0310 00:42:55.242695 1085855 addons.go:249] installing /etc/kubernetes/addons/csi-hostpath-provisioner.yaml
	* I0310 00:42:55.242729 1085855 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-provisioner.yaml (2412 bytes)
	* I0310 00:42:55.446359 1085855 addons.go:249] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	* I0310 00:42:55.446390 1085855 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2310 bytes)
	* I0310 00:42:55.847107 1085855 addons.go:249] installing /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml
	* I0310 00:42:55.847138 1085855 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml (2343 bytes)
	* I0310 00:42:56.034613 1085855 addons.go:249] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	* I0310 00:42:56.034656 1085855 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (789 bytes)
	* I0310 00:42:56.137541 1085855 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.179648384s)
	* I0310 00:42:56.235008 1085855 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	* I0310 00:42:56.449439 1085855 pod_ready.go:102] pod "coredns-74ff55c5b-xlj4r" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:42:55 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:42:55 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:42:55 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:42:55 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.205 PodIP: PodIPs:[] StartTime:2021-03-10 00:42:55 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0004a3df7}] QOSClass:Burstable EphemeralContainerStatuses:[]}
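The `pod_ready` dumps above gate on the pod's `Ready` condition, which stays `ContainersNotReady` while the coredns container is still creating. A minimal sketch of that gating over a condition string; `is_ready` is illustrative, not minikube's actual helper:

```shell
# Decide readiness from a pod-status condition string, mimicking the
# Ready/ContainersReady check that pod_ready.go performs on the real
# PodStatus struct.
is_ready() {
  case "$1" in
    *"Type:Ready Status:True"*) return 0 ;;
    *) return 1 ;;
  esac
}

is_ready "Type:Ready Status:True" && echo "pod is Ready"
is_ready "Type:Ready Status:False Reason:ContainersNotReady" || echo "still waiting"
```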
	* I0310 00:42:56.934994 1085855 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (2.892409857s)
	* I0310 00:42:56.935048 1085855 addons.go:283] Verifying addon registry=true in "addons-20210310004204-1084876"
	* I0310 00:42:56.937942 1085855 out.go:129] * Verifying registry addon...
	* I0310 00:42:56.940882 1085855 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	* I0310 00:42:56.957246 1085855 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	* I0310 00:42:56.957277 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	* I0310 00:42:57.134484 1085855 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.977860826s)
	* I0310 00:42:57.134581 1085855 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.877757162s)
	* I0310 00:42:57.134678 1085855 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (2.986772191s)
	* I0310 00:42:57.540192 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	* I0310 00:42:57.847559 1085855 pod_ready.go:102] pod "coredns-74ff55c5b-xlj4r" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:42:55 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:42:55 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:42:55 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:42:55 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.205 PodIP: PodIPs:[] StartTime:2021-03-10 00:42:55 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00071d407}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	* I0310 00:42:58.057637 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	* I0310 00:42:58.636873 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	* I0310 00:42:59.037865 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	* I0310 00:42:59.253935 1085855 pod_ready.go:102] pod "coredns-74ff55c5b-xlj4r" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:42:55 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:42:55 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:42:55 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:42:55 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.205 PodIP: PodIPs:[] StartTime:2021-03-10 00:42:55 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000b44357}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	* I0310 00:42:59.259325 1085855 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/ingress-configmap.yaml -f /etc/kubernetes/addons/ingress-rbac.yaml -f /etc/kubernetes/addons/ingress-dp.yaml: (4.715557587s)
	* I0310 00:42:59.259360 1085855 addons.go:283] Verifying addon ingress=true in "addons-20210310004204-1084876"
	* I0310 00:42:59.335173 1085855 out.go:129] * Verifying ingress addon...
	* I0310 00:42:59.338057 1085855 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "kube-system" ...
	* I0310 00:42:59.349356 1085855 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	* I0310 00:42:59.349447 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:42:59.537841 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	* I0310 00:42:59.935725 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:00.045502 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	* I0310 00:43:00.262301 1085855 pod_ready.go:102] pod "coredns-74ff55c5b-xlj4r" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:42:55 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:42:55 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:42:55 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:42:55 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.205 PodIP: PodIPs:[] StartTime:2021-03-10 00:42:55 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001570a37}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	* I0310 00:43:00.449282 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:00.547331 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	* I0310 00:43:00.945891 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:01.043021 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	* I0310 00:43:01.452889 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:01.552353 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	* I0310 00:43:01.838560 1085855 pod_ready.go:102] pod "coredns-74ff55c5b-xlj4r" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:42:55 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:42:55 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:42:55 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:42:55 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.205 PodIP: PodIPs:[] StartTime:2021-03-10 00:42:55 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0012951c7}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	* I0310 00:43:01.938635 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:02.055921 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	* I0310 00:43:02.357730 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:02.636602 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	* I0310 00:43:02.854492 1085855 pod_ready.go:102] pod "coredns-74ff55c5b-xlj4r" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:42:55 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]}
	* I0310 00:43:02.953525 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:03.051328 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	* I0310 00:43:03.338994 1085855 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.500379605s)
	* I0310 00:43:03.339100 1085855 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: (8.78795348s)
	* W0310 00:43:03.339152 1085855 addons.go:270] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: Process exited with status 1
	* stdout:
	* customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
	* customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
	* customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
	* customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
	* customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
	* namespace/olm created
	* namespace/operators created
	* serviceaccount/olm-operator-serviceaccount created
	* clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
	* clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
	* deployment.apps/olm-operator created
	* deployment.apps/catalog-operator created
	* clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
	* clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
	* 
	* stderr:
	* Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
	* unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	* unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	* unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "ClusterServiceVersion" in version "operators.coreos.com/v1alpha1"
	* unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "CatalogSource" in version "operators.coreos.com/v1alpha1"
	* I0310 00:43:03.339175 1085855 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: Process exited with status 1
	* stdout:
	* customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
	* customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
	* customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
	* customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
	* customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
	* namespace/olm created
	* namespace/operators created
	* serviceaccount/olm-operator-serviceaccount created
	* clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
	* clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
	* deployment.apps/olm-operator created
	* deployment.apps/catalog-operator created
	* clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
	* clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
	* 
	* stderr:
	* Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
	* unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	* unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	* unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "ClusterServiceVersion" in version "operators.coreos.com/v1alpha1"
	* unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "CatalogSource" in version "operators.coreos.com/v1alpha1"
	* I0310 00:43:03.353007 1085855 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.117936106s)
	* I0310 00:43:03.353055 1085855 addons.go:283] Verifying addon csi-hostpath-driver=true in "addons-20210310004204-1084876"
	* I0310 00:43:03.355965 1085855 out.go:129] * Verifying csi-hostpath-driver addon...
	* I0310 00:43:03.358704 1085855 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	* I0310 00:43:03.448135 1085855 kapi.go:86] Found 5 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	* I0310 00:43:03.448166 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:03.448311 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:03.539025 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	* I0310 00:43:03.616414 1085855 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml
	* I0310 00:43:03.860081 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:03.959424 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:04.056774 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	* I0310 00:43:04.338916 1085855 pod_ready.go:102] pod "coredns-74ff55c5b-xlj4r" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:42:55 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]}
	* I0310 00:43:04.356548 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:04.455835 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:04.544231 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	* I0310 00:43:04.856763 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:04.957228 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:05.047290 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	* I0310 00:43:05.356105 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:05.535344 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:05.539145 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	* I0310 00:43:05.745545 1085855 pod_ready.go:102] pod "coredns-74ff55c5b-xlj4r" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:42:55 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]}
	* I0310 00:43:05.856226 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:05.957037 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:06.036393 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	* I0310 00:43:06.449953 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:06.541646 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	* I0310 00:43:06.552689 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:06.750252 1085855 pod_ready.go:102] pod "coredns-74ff55c5b-xlj4r" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:42:55 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]}
	* I0310 00:43:06.861503 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:06.955989 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:07.038426 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	* I0310 00:43:07.356152 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:07.455533 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:07.462407 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	* I0310 00:43:07.854875 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:07.958373 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:07.961703 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	* I0310 00:43:08.244121 1085855 pod_ready.go:102] pod "coredns-74ff55c5b-xlj4r" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:42:55 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]}
	* I0310 00:43:08.355836 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:08.456494 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:08.538235 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	* I0310 00:43:08.854868 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:08.955602 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:08.963465 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	* I0310 00:43:09.150866 1085855 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: (5.53436101s)
	* I0310 00:43:09.356404 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:09.463526 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:09.545505 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	* I0310 00:43:09.675282 1085855 pod_ready.go:102] pod "coredns-74ff55c5b-xlj4r" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:42:55 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]}
	* I0310 00:43:09.855672 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:09.955953 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:09.965364 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	* I0310 00:43:10.355643 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:10.455747 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:10.461958 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	* I0310 00:43:10.855668 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:10.958858 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:10.962250 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	* I0310 00:43:11.175182 1085855 pod_ready.go:97] pod "coredns-74ff55c5b-xlj4r" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:43:11 +0000 UTC Reason: Message:}
	* I0310 00:43:11.175219 1085855 pod_ready.go:62] duration metric: took 17.572925075s to run WaitForPodReadyByLabel for pod with "kube-dns" label in "kube-system" namespace ...
	* I0310 00:43:11.175232 1085855 pod_ready.go:59] waiting 6m0s for pod with "etcd" label in "kube-system" namespace to be Ready ...
	* I0310 00:43:11.355799 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:11.455123 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:11.462273 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	* I0310 00:43:11.855052 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:11.955453 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:11.963675 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	* I0310 00:43:12.241902 1085855 pod_ready.go:102] pod "etcd-addons-20210310004204-1084876" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:42:35 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [etcd]}
	* I0310 00:43:12.356514 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:12.454487 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:12.462318 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	* I0310 00:43:12.855671 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:12.955475 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:12.962953 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	* I0310 00:43:13.244286 1085855 pod_ready.go:102] pod "etcd-addons-20210310004204-1084876" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:42:35 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [etcd]}
	* I0310 00:43:13.355259 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:13.459064 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:13.463770 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	* I0310 00:43:13.855839 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:13.955706 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:13.964238 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	* I0310 00:43:14.246842 1085855 pod_ready.go:102] pod "etcd-addons-20210310004204-1084876" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:42:35 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [etcd]}
	* I0310 00:43:14.359627 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:14.455901 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:14.463617 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	* I0310 00:43:14.856157 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:14.955913 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:14.962692 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	* I0310 00:43:15.356703 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:15.455667 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:15.462758 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	* I0310 00:43:15.694787 1085855 pod_ready.go:102] pod "etcd-addons-20210310004204-1084876" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:42:35 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [etcd]}
	* I0310 00:43:15.855970 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:15.954858 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:15.962429 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	* I0310 00:43:16.355580 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:16.455192 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:16.462951 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	* I0310 00:43:16.741669 1085855 pod_ready.go:102] pod "etcd-addons-20210310004204-1084876" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:42:35 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [etcd]}
	* I0310 00:43:16.858674 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:16.955922 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:16.962455 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	* I0310 00:43:17.354804 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:17.454850 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:17.463033 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	* I0310 00:43:17.742220 1085855 pod_ready.go:102] pod "etcd-addons-20210310004204-1084876" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:42:35 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [etcd]}
	* I0310 00:43:17.855922 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:17.954992 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:17.962707 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	* I0310 00:43:18.355606 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:18.455330 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:18.461916 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	* I0310 00:43:18.855081 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:18.954446 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:18.962111 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	* I0310 00:43:19.194856 1085855 pod_ready.go:102] pod "etcd-addons-20210310004204-1084876" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:42:35 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [etcd]}
	* I0310 00:43:19.355341 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:19.454182 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:19.462068 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	* I0310 00:43:19.855295 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:19.955338 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:19.963999 1085855 kapi.go:108] duration metric: took 23.023115955s to wait for kubernetes.io/minikube-addons=registry ...
	* I0310 00:43:20.356762 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:20.457796 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:20.743283 1085855 pod_ready.go:102] pod "etcd-addons-20210310004204-1084876" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:42:35 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [etcd]}
	* I0310 00:43:20.854191 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:20.954529 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:21.355861 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:21.454772 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:21.888712 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:21.955572 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:22.194257 1085855 pod_ready.go:102] pod "etcd-addons-20210310004204-1084876" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:42:35 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [etcd]}
	* I0310 00:43:22.354681 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:22.453714 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:22.856404 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:22.954652 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:23.243721 1085855 pod_ready.go:102] pod "etcd-addons-20210310004204-1084876" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:42:35 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [etcd]}
	* I0310 00:43:23.356258 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:23.454224 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:23.855386 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:23.958005 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:24.354556 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:24.455271 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:24.744904 1085855 pod_ready.go:102] pod "etcd-addons-20210310004204-1084876" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:42:35 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [etcd]}
	* I0310 00:43:24.855446 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:24.954999 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:25.357722 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:25.455248 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:25.745702 1085855 pod_ready.go:102] pod "etcd-addons-20210310004204-1084876" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:42:35 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [etcd]}
	* I0310 00:43:25.856613 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:25.954672 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:26.355855 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:26.453762 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:26.854813 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:26.954144 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:27.194331 1085855 pod_ready.go:102] pod "etcd-addons-20210310004204-1084876" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:42:35 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [etcd]}
	* I0310 00:43:27.355694 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:27.455801 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:27.855211 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:27.955221 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:28.196145 1085855 pod_ready.go:102] pod "etcd-addons-20210310004204-1084876" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:42:35 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [etcd]}
	* I0310 00:43:28.356531 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:28.455467 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:28.854327 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:28.955312 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:29.244985 1085855 pod_ready.go:102] pod "etcd-addons-20210310004204-1084876" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:42:35 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [etcd]}
	* I0310 00:43:29.364132 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:29.454754 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:29.856339 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:29.954924 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:30.355784 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:30.453904 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:30.758712 1085855 pod_ready.go:102] pod "etcd-addons-20210310004204-1084876" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:42:35 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [etcd]}
	* I0310 00:43:30.855512 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:30.956179 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:31.356608 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:31.455325 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:31.855502 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:31.959409 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:32.242378 1085855 pod_ready.go:102] pod "etcd-addons-20210310004204-1084876" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:42:35 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [etcd]}
	* I0310 00:43:32.355700 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:32.455177 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:32.856599 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:32.955172 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:33.243481 1085855 pod_ready.go:102] pod "etcd-addons-20210310004204-1084876" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:42:35 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [etcd]}
	* I0310 00:43:33.354693 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:33.454743 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:33.855913 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:33.956287 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:34.243535 1085855 pod_ready.go:102] pod "etcd-addons-20210310004204-1084876" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:42:35 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [etcd]}
	* I0310 00:43:34.355136 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:34.455348 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:34.855404 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:34.955759 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:35.356210 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:35.455146 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:35.693572 1085855 pod_ready.go:102] pod "etcd-addons-20210310004204-1084876" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:42:35 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [etcd]}
	* I0310 00:43:35.854724 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:35.956051 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:36.355136 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:36.457571 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:36.694379 1085855 pod_ready.go:102] pod "etcd-addons-20210310004204-1084876" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:42:35 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [etcd]}
	* I0310 00:43:36.859214 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:36.957659 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:37.355876 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:37.458194 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:37.745471 1085855 pod_ready.go:102] pod "etcd-addons-20210310004204-1084876" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:42:35 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [etcd]}
	* I0310 00:43:37.855141 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:37.959317 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:38.436684 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:38.457530 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:38.749729 1085855 pod_ready.go:102] pod "etcd-addons-20210310004204-1084876" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:42:35 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [etcd]}
	* I0310 00:43:38.855187 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:38.954314 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:39.356445 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:39.454332 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:39.856258 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:39.954757 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:40.252047 1085855 pod_ready.go:102] pod "etcd-addons-20210310004204-1084876" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:42:35 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [etcd]}
	* I0310 00:43:40.356012 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:40.455631 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:40.856492 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:40.955466 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:41.355731 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:41.456109 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:41.745854 1085855 pod_ready.go:102] pod "etcd-addons-20210310004204-1084876" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:42:35 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [etcd]}
	* I0310 00:43:41.858529 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:41.954880 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:42.360223 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:42.459284 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:42.749663 1085855 pod_ready.go:102] pod "etcd-addons-20210310004204-1084876" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:42:35 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [etcd]}
	* I0310 00:43:42.858718 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:42.955742 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:43.358467 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:43.456609 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:43.855132 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:43.956531 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:44.194321 1085855 pod_ready.go:102] pod "etcd-addons-20210310004204-1084876" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:42:35 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [etcd]}
	* I0310 00:43:44.355685 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:44.455424 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:44.860724 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:44.956186 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:45.242771 1085855 pod_ready.go:102] pod "etcd-addons-20210310004204-1084876" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:42:35 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [etcd]}
	* I0310 00:43:45.355875 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:45.454371 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:45.855896 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:45.953768 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:46.355623 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:46.455165 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:46.695227 1085855 pod_ready.go:102] pod "etcd-addons-20210310004204-1084876" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:42:35 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [etcd]}
	* I0310 00:43:46.855693 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:46.955013 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:47.639435 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:47.644104 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:47.755445 1085855 pod_ready.go:102] pod "etcd-addons-20210310004204-1084876" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:42:35 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [etcd]}
	* I0310 00:43:47.855335 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:47.954945 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:48.356244 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:48.458061 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:48.856556 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:48.955264 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:49.245769 1085855 pod_ready.go:102] pod "etcd-addons-20210310004204-1084876" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:42:35 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [etcd]}
	* I0310 00:43:49.355779 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:49.456128 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:49.855369 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:49.955381 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:50.355657 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:50.456035 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:50.744281 1085855 pod_ready.go:97] pod "etcd-addons-20210310004204-1084876" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:43:50 +0000 UTC Reason: Message:}
	* I0310 00:43:50.744321 1085855 pod_ready.go:62] duration metric: took 39.569079194s to run WaitForPodReadyByLabel for pod with "etcd" label in "kube-system" namespace ...
	* I0310 00:43:50.744336 1085855 pod_ready.go:59] waiting 6m0s for pod with "kube-apiserver" label in "kube-system" namespace to be Ready ...
	* I0310 00:43:50.755600 1085855 pod_ready.go:97] pod "kube-apiserver-addons-20210310004204-1084876" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:42:41 +0000 UTC Reason: Message:}
	* I0310 00:43:50.755628 1085855 pod_ready.go:62] duration metric: took 11.282438ms to run WaitForPodReadyByLabel for pod with "kube-apiserver" label in "kube-system" namespace ...
	* I0310 00:43:50.755642 1085855 pod_ready.go:59] waiting 6m0s for pod with "kube-controller-manager" label in "kube-system" namespace to be Ready ...
	* I0310 00:43:50.766452 1085855 pod_ready.go:97] pod "kube-controller-manager-addons-20210310004204-1084876" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:43:44 +0000 UTC Reason: Message:}
	* I0310 00:43:50.766482 1085855 pod_ready.go:62] duration metric: took 10.828849ms to run WaitForPodReadyByLabel for pod with "kube-controller-manager" label in "kube-system" namespace ...
	* I0310 00:43:50.766494 1085855 pod_ready.go:59] waiting 6m0s for pod with "kube-proxy" label in "kube-system" namespace to be Ready ...
	* I0310 00:43:50.841670 1085855 pod_ready.go:97] pod "kube-proxy-dmzxd" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:42:56 +0000 UTC Reason: Message:}
	* I0310 00:43:50.841699 1085855 pod_ready.go:62] duration metric: took 75.194679ms to run WaitForPodReadyByLabel for pod with "kube-proxy" label in "kube-system" namespace ...
	* I0310 00:43:50.841710 1085855 pod_ready.go:59] waiting 6m0s for pod with "kube-scheduler" label in "kube-system" namespace to be Ready ...
	* I0310 00:43:50.859728 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:50.954238 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:51.355409 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:51.454787 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:51.856126 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:51.871728 1085855 pod_ready.go:102] pod "kube-scheduler-addons-20210310004204-1084876" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:42:35 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [kube-scheduler]}
	* I0310 00:43:51.955164 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:52.356138 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:52.456653 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:52.855517 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:52.954845 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:53.355253 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:53.438909 1085855 pod_ready.go:102] pod "kube-scheduler-addons-20210310004204-1084876" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:42:35 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [kube-scheduler]}
	* I0310 00:43:53.454033 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:53.854934 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:53.954375 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:54.354558 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:54.455472 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:54.856380 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:54.870656 1085855 pod_ready.go:102] pod "kube-scheduler-addons-20210310004204-1084876" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:42:35 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [kube-scheduler]}
	* I0310 00:43:54.954826 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:55.355495 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:55.455888 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:55.854949 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:55.938198 1085855 pod_ready.go:102] pod "kube-scheduler-addons-20210310004204-1084876" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:42:35 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [kube-scheduler]}
	* I0310 00:43:55.957382 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:56.355142 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:56.369039 1085855 pod_ready.go:97] pod "kube-scheduler-addons-20210310004204-1084876" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 00:43:56 +0000 UTC Reason: Message:}
	* I0310 00:43:56.369071 1085855 pod_ready.go:62] duration metric: took 5.527346149s to run WaitForPodReadyByLabel for pod with "kube-scheduler" label in "kube-system" namespace ...
	* I0310 00:43:56.369081 1085855 pod_ready.go:39] duration metric: took 1m2.766802936s for extra waiting for kube-system core pods to be Ready ...
	* I0310 00:43:56.369096 1085855 api_server.go:48] waiting for apiserver process to appear ...
	* I0310 00:43:56.369197 1085855 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-apiserver --format=
	* I0310 00:43:56.454412 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:56.466794 1085855 logs.go:255] 1 containers: [924189b6ebf8]
	* I0310 00:43:56.466896 1085855 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_etcd --format=
	* I0310 00:43:56.552211 1085855 logs.go:255] 1 containers: [fcd904ed9876]
	* I0310 00:43:56.552287 1085855 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_coredns --format=
	* I0310 00:43:56.598073 1085855 logs.go:255] 1 containers: [63e14282682c]
	* I0310 00:43:56.598181 1085855 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-scheduler --format=
	* I0310 00:43:56.658552 1085855 logs.go:255] 1 containers: [cb950dda7587]
	* I0310 00:43:56.658644 1085855 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-proxy --format=
	* I0310 00:43:56.706024 1085855 logs.go:255] 1 containers: [0bd04dbed725]
	* I0310 00:43:56.706097 1085855 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format=
	* I0310 00:43:56.778225 1085855 logs.go:255] 0 containers: []
	* W0310 00:43:56.778266 1085855 logs.go:257] No container was found matching "kubernetes-dashboard"
	* I0310 00:43:56.778327 1085855 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_storage-provisioner --format=
	* I0310 00:43:56.855087 1085855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	* I0310 00:43:56.864128 1085855 logs.go:255] 1 containers: [e4d14e816543]
	* I0310 00:43:56.864208 1085855 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format=
	* I0310 00:43:56.936426 1085855 logs.go:255] 1 containers: [d1c1a132de64]
	* I0310 00:43:56.936510 1085855 logs.go:122] Gathering logs for kube-controller-manager [d1c1a132de64] ...
	* I0310 00:43:56.936524 1085855 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 d1c1a132de64"
	* I0310 00:43:56.955159 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:57.020073 1085855 logs.go:122] Gathering logs for Docker ...
	* I0310 00:43:57.020118 1085855 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	* I0310 00:43:57.062264 1085855 logs.go:122] Gathering logs for kube-apiserver [924189b6ebf8] ...
	* I0310 00:43:57.062305 1085855 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 924189b6ebf8"
	* I0310 00:43:57.267242 1085855 logs.go:122] Gathering logs for coredns [63e14282682c] ...
	* I0310 00:43:57.267282 1085855 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 63e14282682c"
	* I0310 00:43:57.354834 1085855 logs.go:122] Gathering logs for kube-scheduler [cb950dda7587] ...
	* I0310 00:43:57.354870 1085855 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 cb950dda7587"
	* I0310 00:43:57.355121 1085855 kapi.go:108] duration metric: took 58.017062623s to wait for app.kubernetes.io/name=ingress-nginx ...
	* I0310 00:43:57.455663 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:57.455760 1085855 logs.go:122] Gathering logs for kube-proxy [0bd04dbed725] ...
	* I0310 00:43:57.455786 1085855 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 0bd04dbed725"
	* I0310 00:43:57.549451 1085855 logs.go:122] Gathering logs for storage-provisioner [e4d14e816543] ...
	* I0310 00:43:57.549496 1085855 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 e4d14e816543"
	* I0310 00:43:57.740920 1085855 logs.go:122] Gathering logs for container status ...
	* I0310 00:43:57.740964 1085855 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	* I0310 00:43:57.780209 1085855 logs.go:122] Gathering logs for kubelet ...
	* I0310 00:43:57.780251 1085855 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	* W0310 00:43:57.880795 1085855 logs.go:137] Found kubelet problem: Mar 10 00:42:58 addons-20210310004204-1084876 kubelet[2310]: E0310 00:42:58.540335    2310 reflector.go:138] object-"kube-system"/"snapshot-controller-token-44ggx": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "snapshot-controller-token-44ggx" is forbidden: User "system:node:addons-20210310004204-1084876" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-20210310004204-1084876' and this object
	* W0310 00:43:57.901253 1085855 logs.go:137] Found kubelet problem: Mar 10 00:43:26 addons-20210310004204-1084876 kubelet[2310]: E0310 00:43:26.234741    2310 pod_workers.go:191] Error syncing pod e69c3e48-b843-40e7-8bc9-6b7110bdffd4 ("ingress-nginx-admission-patch-fswz4_kube-system(e69c3e48-b843-40e7-8bc9-6b7110bdffd4)"), skipping: failed to "StartContainer" for "patch" with CrashLoopBackOff: "back-off 10s restarting failed container=patch pod=ingress-nginx-admission-patch-fswz4_kube-system(e69c3e48-b843-40e7-8bc9-6b7110bdffd4)"
	* W0310 00:43:57.901815 1085855 logs.go:137] Found kubelet problem: Mar 10 00:43:27 addons-20210310004204-1084876 kubelet[2310]: E0310 00:43:27.249691    2310 pod_workers.go:191] Error syncing pod e69c3e48-b843-40e7-8bc9-6b7110bdffd4 ("ingress-nginx-admission-patch-fswz4_kube-system(e69c3e48-b843-40e7-8bc9-6b7110bdffd4)"), skipping: failed to "StartContainer" for "patch" with CrashLoopBackOff: "back-off 10s restarting failed container=patch pod=ingress-nginx-admission-patch-fswz4_kube-system(e69c3e48-b843-40e7-8bc9-6b7110bdffd4)"
	* I0310 00:43:57.919336 1085855 logs.go:122] Gathering logs for dmesg ...
	* I0310 00:43:57.919384 1085855 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	* I0310 00:43:57.944251 1085855 logs.go:122] Gathering logs for describe nodes ...
	* I0310 00:43:57.944294 1085855 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	* I0310 00:43:57.955600 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:58.423600 1085855 logs.go:122] Gathering logs for etcd [fcd904ed9876] ...
	* I0310 00:43:58.423636 1085855 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 fcd904ed9876"
	* I0310 00:43:58.457246 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* W0310 00:43:58.482005 1085855 out.go:191] X Problems detected in kubelet:
	* W0310 00:43:58.482065 1085855 out.go:191]   - Mar 10 00:42:58 addons-20210310004204-1084876 kubelet[2310]: E0310 00:42:58.540335    2310 reflector.go:138] object-"kube-system"/"snapshot-controller-token-44ggx": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "snapshot-controller-token-44ggx" is forbidden: User "system:node:addons-20210310004204-1084876" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-20210310004204-1084876' and this object
	* W0310 00:43:58.482110 1085855 out.go:191]   - Mar 10 00:43:26 addons-20210310004204-1084876 kubelet[2310]: E0310 00:43:26.234741    2310 pod_workers.go:191] Error syncing pod e69c3e48-b843-40e7-8bc9-6b7110bdffd4 ("ingress-nginx-admission-patch-fswz4_kube-system(e69c3e48-b843-40e7-8bc9-6b7110bdffd4)"), skipping: failed to "StartContainer" for "patch" with CrashLoopBackOff: "back-off 10s restarting failed container=patch pod=ingress-nginx-admission-patch-fswz4_kube-system(e69c3e48-b843-40e7-8bc9-6b7110bdffd4)"
	* W0310 00:43:58.482161 1085855 out.go:191]   - Mar 10 00:43:27 addons-20210310004204-1084876 kubelet[2310]: E0310 00:43:27.249691    2310 pod_workers.go:191] Error syncing pod e69c3e48-b843-40e7-8bc9-6b7110bdffd4 ("ingress-nginx-admission-patch-fswz4_kube-system(e69c3e48-b843-40e7-8bc9-6b7110bdffd4)"), skipping: failed to "StartContainer" for "patch" with CrashLoopBackOff: "back-off 10s restarting failed container=patch pod=ingress-nginx-admission-patch-fswz4_kube-system(e69c3e48-b843-40e7-8bc9-6b7110bdffd4)"
	* I0310 00:43:58.955281 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:59.455081 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:43:59.955777 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:44:00.455824 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:44:00.955488 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:44:01.457623 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:44:01.956230 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:44:02.455071 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:44:02.954470 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:44:03.455472 1085855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	* I0310 00:44:03.954803 1085855 kapi.go:108] duration metric: took 1m0.596093722s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	* I0310 00:44:03.958061 1085855 out.go:129] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, helm-tiller, volumesnapshots, olm, registry, ingress, csi-hostpath-driver
	* I0310 00:44:03.958092 1085855 addons.go:314] enableAddons completed in 1m10.389758277s
	* I0310 00:44:08.484974 1085855 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 00:44:08.536512 1085855 api_server.go:68] duration metric: took 1m14.968222364s to wait for apiserver process to appear ...
	* I0310 00:44:08.536565 1085855 api_server.go:84] waiting for apiserver healthz status ...
	* I0310 00:44:08.536633 1085855 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-apiserver --format=
	* I0310 00:44:08.584569 1085855 logs.go:255] 1 containers: [924189b6ebf8]
	* I0310 00:44:08.584647 1085855 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_etcd --format=
	* I0310 00:44:08.632762 1085855 logs.go:255] 1 containers: [fcd904ed9876]
	* I0310 00:44:08.632856 1085855 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_coredns --format=
	* I0310 00:44:08.681696 1085855 logs.go:255] 1 containers: [63e14282682c]
	* I0310 00:44:08.681803 1085855 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-scheduler --format=
	* I0310 00:44:08.728802 1085855 logs.go:255] 1 containers: [cb950dda7587]
	* I0310 00:44:08.728901 1085855 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-proxy --format=
	* I0310 00:44:08.780034 1085855 logs.go:255] 1 containers: [0bd04dbed725]
	* I0310 00:44:08.780119 1085855 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format=
	* I0310 00:44:08.840346 1085855 logs.go:255] 0 containers: []
	* W0310 00:44:08.840381 1085855 logs.go:257] No container was found matching "kubernetes-dashboard"
	* I0310 00:44:08.840444 1085855 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_storage-provisioner --format=
	* I0310 00:44:08.887442 1085855 logs.go:255] 1 containers: [e4d14e816543]
	* I0310 00:44:08.887534 1085855 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format=
	* I0310 00:44:08.939270 1085855 logs.go:255] 1 containers: [d1c1a132de64]
	* I0310 00:44:08.939308 1085855 logs.go:122] Gathering logs for etcd [fcd904ed9876] ...
	* I0310 00:44:08.939321 1085855 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 fcd904ed9876"
	* I0310 00:44:08.995000 1085855 logs.go:122] Gathering logs for storage-provisioner [e4d14e816543] ...
	* I0310 00:44:08.995034 1085855 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 e4d14e816543"
	* I0310 00:44:09.045931 1085855 logs.go:122] Gathering logs for kube-controller-manager [d1c1a132de64] ...
	* I0310 00:44:09.045963 1085855 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 d1c1a132de64"
	* I0310 00:44:09.118300 1085855 logs.go:122] Gathering logs for dmesg ...
	* I0310 00:44:09.118345 1085855 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	* I0310 00:44:09.146265 1085855 logs.go:122] Gathering logs for kube-apiserver [924189b6ebf8] ...
	* I0310 00:44:09.146322 1085855 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 924189b6ebf8"
	* I0310 00:44:09.274126 1085855 logs.go:122] Gathering logs for coredns [63e14282682c] ...
	* I0310 00:44:09.274182 1085855 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 63e14282682c"
	* I0310 00:44:09.325625 1085855 logs.go:122] Gathering logs for kube-scheduler [cb950dda7587] ...
	* I0310 00:44:09.325667 1085855 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 cb950dda7587"
	* I0310 00:44:09.383577 1085855 logs.go:122] Gathering logs for kube-proxy [0bd04dbed725] ...
	* I0310 00:44:09.383611 1085855 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 0bd04dbed725"
	* I0310 00:44:09.433746 1085855 logs.go:122] Gathering logs for Docker ...
	* I0310 00:44:09.433781 1085855 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	* I0310 00:44:09.460564 1085855 logs.go:122] Gathering logs for container status ...
	* I0310 00:44:09.460605 1085855 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	* I0310 00:44:09.497009 1085855 logs.go:122] Gathering logs for kubelet ...
	* I0310 00:44:09.497048 1085855 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	* W0310 00:44:09.542478 1085855 logs.go:137] Found kubelet problem: Mar 10 00:42:58 addons-20210310004204-1084876 kubelet[2310]: E0310 00:42:58.540335    2310 reflector.go:138] object-"kube-system"/"snapshot-controller-token-44ggx": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "snapshot-controller-token-44ggx" is forbidden: User "system:node:addons-20210310004204-1084876" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-20210310004204-1084876' and this object
	* W0310 00:44:09.567588 1085855 logs.go:137] Found kubelet problem: Mar 10 00:43:26 addons-20210310004204-1084876 kubelet[2310]: E0310 00:43:26.234741    2310 pod_workers.go:191] Error syncing pod e69c3e48-b843-40e7-8bc9-6b7110bdffd4 ("ingress-nginx-admission-patch-fswz4_kube-system(e69c3e48-b843-40e7-8bc9-6b7110bdffd4)"), skipping: failed to "StartContainer" for "patch" with CrashLoopBackOff: "back-off 10s restarting failed container=patch pod=ingress-nginx-admission-patch-fswz4_kube-system(e69c3e48-b843-40e7-8bc9-6b7110bdffd4)"
	* W0310 00:44:09.568070 1085855 logs.go:137] Found kubelet problem: Mar 10 00:43:27 addons-20210310004204-1084876 kubelet[2310]: E0310 00:43:27.249691    2310 pod_workers.go:191] Error syncing pod e69c3e48-b843-40e7-8bc9-6b7110bdffd4 ("ingress-nginx-admission-patch-fswz4_kube-system(e69c3e48-b843-40e7-8bc9-6b7110bdffd4)"), skipping: failed to "StartContainer" for "patch" with CrashLoopBackOff: "back-off 10s restarting failed container=patch pod=ingress-nginx-admission-patch-fswz4_kube-system(e69c3e48-b843-40e7-8bc9-6b7110bdffd4)"
	* I0310 00:44:09.586728 1085855 logs.go:122] Gathering logs for describe nodes ...
	* I0310 00:44:09.586772 1085855 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	* W0310 00:44:09.711291 1085855 out.go:191] X Problems detected in kubelet:
	* W0310 00:44:09.711354 1085855 out.go:191]   - Mar 10 00:42:58 addons-20210310004204-1084876 kubelet[2310]: E0310 00:42:58.540335    2310 reflector.go:138] object-"kube-system"/"snapshot-controller-token-44ggx": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "snapshot-controller-token-44ggx" is forbidden: User "system:node:addons-20210310004204-1084876" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-20210310004204-1084876' and this object
	* W0310 00:44:09.711420 1085855 out.go:191]   - Mar 10 00:43:26 addons-20210310004204-1084876 kubelet[2310]: E0310 00:43:26.234741    2310 pod_workers.go:191] Error syncing pod e69c3e48-b843-40e7-8bc9-6b7110bdffd4 ("ingress-nginx-admission-patch-fswz4_kube-system(e69c3e48-b843-40e7-8bc9-6b7110bdffd4)"), skipping: failed to "StartContainer" for "patch" with CrashLoopBackOff: "back-off 10s restarting failed container=patch pod=ingress-nginx-admission-patch-fswz4_kube-system(e69c3e48-b843-40e7-8bc9-6b7110bdffd4)"
	* W0310 00:44:09.711473 1085855 out.go:191]   - Mar 10 00:43:27 addons-20210310004204-1084876 kubelet[2310]: E0310 00:43:27.249691    2310 pod_workers.go:191] Error syncing pod e69c3e48-b843-40e7-8bc9-6b7110bdffd4 ("ingress-nginx-admission-patch-fswz4_kube-system(e69c3e48-b843-40e7-8bc9-6b7110bdffd4)"), skipping: failed to "StartContainer" for "patch" with CrashLoopBackOff: "back-off 10s restarting failed container=patch pod=ingress-nginx-admission-patch-fswz4_kube-system(e69c3e48-b843-40e7-8bc9-6b7110bdffd4)"
	* I0310 00:44:19.714467 1085855 api_server.go:221] Checking apiserver healthz at https://192.168.49.205:8443/healthz ...
	* I0310 00:44:19.722053 1085855 api_server.go:241] https://192.168.49.205:8443/healthz returned 200:
	* ok
	* I0310 00:44:19.723341 1085855 api_server.go:137] control plane version: v1.20.2
	* I0310 00:44:19.723366 1085855 api_server.go:127] duration metric: took 11.186794732s to wait for apiserver health ...
	* I0310 00:44:19.723377 1085855 system_pods.go:41] waiting for kube-system pods to appear ...
	* I0310 00:44:19.723437 1085855 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-apiserver --format=
	* I0310 00:44:19.771039 1085855 logs.go:255] 1 containers: [924189b6ebf8]
	* I0310 00:44:19.771122 1085855 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_etcd --format=
	* I0310 00:44:19.818437 1085855 logs.go:255] 1 containers: [fcd904ed9876]
	* I0310 00:44:19.818528 1085855 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_coredns --format=
	* I0310 00:44:19.865536 1085855 logs.go:255] 1 containers: [63e14282682c]
	* I0310 00:44:19.865642 1085855 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-scheduler --format=
	* I0310 00:44:19.915183 1085855 logs.go:255] 1 containers: [cb950dda7587]
	* I0310 00:44:19.915256 1085855 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-proxy --format=
	* I0310 00:44:19.963315 1085855 logs.go:255] 1 containers: [0bd04dbed725]
	* I0310 00:44:19.963403 1085855 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format=
	* I0310 00:44:20.009629 1085855 logs.go:255] 0 containers: []
	* W0310 00:44:20.009666 1085855 logs.go:257] No container was found matching "kubernetes-dashboard"
	* I0310 00:44:20.009733 1085855 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_storage-provisioner --format=
	* I0310 00:44:20.056733 1085855 logs.go:255] 1 containers: [e4d14e816543]
	* I0310 00:44:20.056838 1085855 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format=
	* I0310 00:44:20.103540 1085855 logs.go:255] 1 containers: [d1c1a132de64]
	* I0310 00:44:20.103577 1085855 logs.go:122] Gathering logs for Docker ...
	* I0310 00:44:20.103589 1085855 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	* I0310 00:44:20.129887 1085855 logs.go:122] Gathering logs for describe nodes ...
	* I0310 00:44:20.129923 1085855 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	* I0310 00:44:20.255836 1085855 logs.go:122] Gathering logs for etcd [fcd904ed9876] ...
	* I0310 00:44:20.255896 1085855 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 fcd904ed9876"
	* I0310 00:44:20.310887 1085855 logs.go:122] Gathering logs for kube-proxy [0bd04dbed725] ...
	* I0310 00:44:20.310921 1085855 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 0bd04dbed725"
	* I0310 00:44:20.360771 1085855 logs.go:122] Gathering logs for storage-provisioner [e4d14e816543] ...
	* I0310 00:44:20.360808 1085855 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 e4d14e816543"
	* I0310 00:44:20.411461 1085855 logs.go:122] Gathering logs for kube-scheduler [cb950dda7587] ...
	* I0310 00:44:20.411495 1085855 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 cb950dda7587"
	* I0310 00:44:20.463327 1085855 logs.go:122] Gathering logs for kube-controller-manager [d1c1a132de64] ...
	* I0310 00:44:20.463362 1085855 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 d1c1a132de64"
	* I0310 00:44:20.538244 1085855 logs.go:122] Gathering logs for container status ...
	* I0310 00:44:20.538287 1085855 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	* I0310 00:44:20.576095 1085855 logs.go:122] Gathering logs for kubelet ...
	* I0310 00:44:20.576130 1085855 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	* W0310 00:44:20.621417 1085855 logs.go:137] Found kubelet problem: Mar 10 00:42:58 addons-20210310004204-1084876 kubelet[2310]: E0310 00:42:58.540335    2310 reflector.go:138] object-"kube-system"/"snapshot-controller-token-44ggx": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "snapshot-controller-token-44ggx" is forbidden: User "system:node:addons-20210310004204-1084876" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-20210310004204-1084876' and this object
	* W0310 00:44:20.645239 1085855 logs.go:137] Found kubelet problem: Mar 10 00:43:26 addons-20210310004204-1084876 kubelet[2310]: E0310 00:43:26.234741    2310 pod_workers.go:191] Error syncing pod e69c3e48-b843-40e7-8bc9-6b7110bdffd4 ("ingress-nginx-admission-patch-fswz4_kube-system(e69c3e48-b843-40e7-8bc9-6b7110bdffd4)"), skipping: failed to "StartContainer" for "patch" with CrashLoopBackOff: "back-off 10s restarting failed container=patch pod=ingress-nginx-admission-patch-fswz4_kube-system(e69c3e48-b843-40e7-8bc9-6b7110bdffd4)"
	* W0310 00:44:20.645733 1085855 logs.go:137] Found kubelet problem: Mar 10 00:43:27 addons-20210310004204-1084876 kubelet[2310]: E0310 00:43:27.249691    2310 pod_workers.go:191] Error syncing pod e69c3e48-b843-40e7-8bc9-6b7110bdffd4 ("ingress-nginx-admission-patch-fswz4_kube-system(e69c3e48-b843-40e7-8bc9-6b7110bdffd4)"), skipping: failed to "StartContainer" for "patch" with CrashLoopBackOff: "back-off 10s restarting failed container=patch pod=ingress-nginx-admission-patch-fswz4_kube-system(e69c3e48-b843-40e7-8bc9-6b7110bdffd4)"
	* I0310 00:44:20.664100 1085855 logs.go:122] Gathering logs for dmesg ...
	* I0310 00:44:20.664137 1085855 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	* I0310 00:44:20.685319 1085855 logs.go:122] Gathering logs for kube-apiserver [924189b6ebf8] ...
	* I0310 00:44:20.685356 1085855 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 924189b6ebf8"
	* I0310 00:44:20.768590 1085855 logs.go:122] Gathering logs for coredns [63e14282682c] ...
	* I0310 00:44:20.768630 1085855 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 63e14282682c"
	* W0310 00:44:20.817031 1085855 out.go:191] X Problems detected in kubelet:
	* W0310 00:44:20.817098 1085855 out.go:191]   - Mar 10 00:42:58 addons-20210310004204-1084876 kubelet[2310]: E0310 00:42:58.540335    2310 reflector.go:138] object-"kube-system"/"snapshot-controller-token-44ggx": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "snapshot-controller-token-44ggx" is forbidden: User "system:node:addons-20210310004204-1084876" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-20210310004204-1084876' and this object
	* W0310 00:44:20.817153 1085855 out.go:191]   - Mar 10 00:43:26 addons-20210310004204-1084876 kubelet[2310]: E0310 00:43:26.234741    2310 pod_workers.go:191] Error syncing pod e69c3e48-b843-40e7-8bc9-6b7110bdffd4 ("ingress-nginx-admission-patch-fswz4_kube-system(e69c3e48-b843-40e7-8bc9-6b7110bdffd4)"), skipping: failed to "StartContainer" for "patch" with CrashLoopBackOff: "back-off 10s restarting failed container=patch pod=ingress-nginx-admission-patch-fswz4_kube-system(e69c3e48-b843-40e7-8bc9-6b7110bdffd4)"
	* W0310 00:44:20.817215 1085855 out.go:191]   - Mar 10 00:43:27 addons-20210310004204-1084876 kubelet[2310]: E0310 00:43:27.249691    2310 pod_workers.go:191] Error syncing pod e69c3e48-b843-40e7-8bc9-6b7110bdffd4 ("ingress-nginx-admission-patch-fswz4_kube-system(e69c3e48-b843-40e7-8bc9-6b7110bdffd4)"), skipping: failed to "StartContainer" for "patch" with CrashLoopBackOff: "back-off 10s restarting failed container=patch pod=ingress-nginx-admission-patch-fswz4_kube-system(e69c3e48-b843-40e7-8bc9-6b7110bdffd4)"
	* I0310 00:44:30.832056 1085855 system_pods.go:57] 21 kube-system pods found
	* I0310 00:44:30.832123 1085855 system_pods.go:59] "coredns-74ff55c5b-xlj4r" [d5d7ae3f-df10-464c-aaeb-cd2250d05a6d] Running
	* I0310 00:44:30.832130 1085855 system_pods.go:59] "csi-hostpath-attacher-0" [41e91ce4-ce56-4bc8-8902-afe26d0dbfb8] Running
	* I0310 00:44:30.832135 1085855 system_pods.go:59] "csi-hostpath-provisioner-0" [f2e72657-17c9-4993-bf6d-09548079e290] Running
	* I0310 00:44:30.832141 1085855 system_pods.go:59] "csi-hostpath-resizer-0" [87c3329d-d656-4e28-a394-ca1dc82c4091] Running
	* I0310 00:44:30.832146 1085855 system_pods.go:59] "csi-hostpath-snapshotter-0" [20c41e58-35d0-4789-98b9-9f0028fd9f84] Running
	* I0310 00:44:30.832150 1085855 system_pods.go:59] "csi-hostpathplugin-0" [ffb79689-4361-442a-a04e-2a42cdff9423] Running
	* I0310 00:44:30.832155 1085855 system_pods.go:59] "etcd-addons-20210310004204-1084876" [75a9cc59-425f-401a-8b80-d1ab9c33e0e6] Running
	* I0310 00:44:30.832164 1085855 system_pods.go:59] "ingress-nginx-admission-create-tw5ql" [5d50f961-79cf-4f1f-a451-8cc9d398cc8b] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
	* I0310 00:44:30.832207 1085855 system_pods.go:59] "ingress-nginx-admission-patch-fswz4" [e69c3e48-b843-40e7-8bc9-6b7110bdffd4] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
	* I0310 00:44:30.832212 1085855 system_pods.go:59] "ingress-nginx-controller-65cf89dc4f-v4q4w" [011dacf3-7f46-4baa-835b-48ada5eca434] Running
	* I0310 00:44:30.832223 1085855 system_pods.go:59] "kube-apiserver-addons-20210310004204-1084876" [a2a7af64-f8cb-4766-bb28-02f5634b08bc] Running
	* I0310 00:44:30.832229 1085855 system_pods.go:59] "kube-controller-manager-addons-20210310004204-1084876" [5928b5cf-5ab3-4106-aac0-f292d3e02e59] Running
	* I0310 00:44:30.832237 1085855 system_pods.go:59] "kube-proxy-dmzxd" [7ee74d1c-6552-4a6f-9838-9fa762359752] Running
	* I0310 00:44:30.832242 1085855 system_pods.go:59] "kube-scheduler-addons-20210310004204-1084876" [34eca9ef-d050-466b-92b8-c0e8902582ba] Running
	* I0310 00:44:30.832250 1085855 system_pods.go:59] "metrics-server-56c4f8c9d6-s86jb" [f3f5927f-9bfd-4423-a9ef-ec58fb96681d] Running
	* I0310 00:44:30.832254 1085855 system_pods.go:59] "registry-bqsz2" [28fe6c05-7f77-44d8-a3d2-839555a750a4] Running
	* I0310 00:44:30.832259 1085855 system_pods.go:59] "registry-proxy-jgjdk" [450bf9c8-1e1d-41f4-a105-2836463b7ccd] Running
	* I0310 00:44:30.832271 1085855 system_pods.go:59] "snapshot-controller-66df655854-bl5n2" [45796b79-e6e1-42c7-a9f6-ae4c18cfaf04] Running
	* I0310 00:44:30.832275 1085855 system_pods.go:59] "snapshot-controller-66df655854-shthb" [43951854-067b-47b1-8bbc-c5cd3a2a8f5d] Running
	* I0310 00:44:30.832280 1085855 system_pods.go:59] "storage-provisioner" [2306cc2b-4127-410f-b862-d75f5923ec76] Running
	* I0310 00:44:30.832284 1085855 system_pods.go:59] "tiller-deploy-7c86b7fbdf-k4qzh" [1c948044-7876-4077-a225-b7964c8e9b4e] Running
	* I0310 00:44:30.832291 1085855 system_pods.go:72] duration metric: took 11.108906838s to wait for pod list to return data ...
	* I0310 00:44:30.832303 1085855 default_sa.go:33] waiting for default service account to be created ...
	* I0310 00:44:30.835179 1085855 default_sa.go:44] found service account: "default"
	* I0310 00:44:30.835202 1085855 default_sa.go:54] duration metric: took 2.888844ms for default service account to be created ...
	* I0310 00:44:30.835212 1085855 system_pods.go:114] waiting for k8s-apps to be running ...
	* I0310 00:44:30.844509 1085855 system_pods.go:84] 21 kube-system pods found
	* I0310 00:44:30.844542 1085855 system_pods.go:87] "coredns-74ff55c5b-xlj4r" [d5d7ae3f-df10-464c-aaeb-cd2250d05a6d] Running
	* I0310 00:44:30.844550 1085855 system_pods.go:87] "csi-hostpath-attacher-0" [41e91ce4-ce56-4bc8-8902-afe26d0dbfb8] Running
	* I0310 00:44:30.844555 1085855 system_pods.go:87] "csi-hostpath-provisioner-0" [f2e72657-17c9-4993-bf6d-09548079e290] Running
	* I0310 00:44:30.844561 1085855 system_pods.go:87] "csi-hostpath-resizer-0" [87c3329d-d656-4e28-a394-ca1dc82c4091] Running
	* I0310 00:44:30.844568 1085855 system_pods.go:87] "csi-hostpath-snapshotter-0" [20c41e58-35d0-4789-98b9-9f0028fd9f84] Running
	* I0310 00:44:30.844576 1085855 system_pods.go:87] "csi-hostpathplugin-0" [ffb79689-4361-442a-a04e-2a42cdff9423] Running
	* I0310 00:44:30.844583 1085855 system_pods.go:87] "etcd-addons-20210310004204-1084876" [75a9cc59-425f-401a-8b80-d1ab9c33e0e6] Running
	* I0310 00:44:30.844595 1085855 system_pods.go:87] "ingress-nginx-admission-create-tw5ql" [5d50f961-79cf-4f1f-a451-8cc9d398cc8b] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
	* I0310 00:44:30.844613 1085855 system_pods.go:87] "ingress-nginx-admission-patch-fswz4" [e69c3e48-b843-40e7-8bc9-6b7110bdffd4] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
	* I0310 00:44:30.844621 1085855 system_pods.go:87] "ingress-nginx-controller-65cf89dc4f-v4q4w" [011dacf3-7f46-4baa-835b-48ada5eca434] Running
	* I0310 00:44:30.844629 1085855 system_pods.go:87] "kube-apiserver-addons-20210310004204-1084876" [a2a7af64-f8cb-4766-bb28-02f5634b08bc] Running
	* I0310 00:44:30.844642 1085855 system_pods.go:87] "kube-controller-manager-addons-20210310004204-1084876" [5928b5cf-5ab3-4106-aac0-f292d3e02e59] Running
	* I0310 00:44:30.844648 1085855 system_pods.go:87] "kube-proxy-dmzxd" [7ee74d1c-6552-4a6f-9838-9fa762359752] Running
	* I0310 00:44:30.844663 1085855 system_pods.go:87] "kube-scheduler-addons-20210310004204-1084876" [34eca9ef-d050-466b-92b8-c0e8902582ba] Running
	* I0310 00:44:30.844675 1085855 system_pods.go:87] "metrics-server-56c4f8c9d6-s86jb" [f3f5927f-9bfd-4423-a9ef-ec58fb96681d] Running
	* I0310 00:44:30.844687 1085855 system_pods.go:87] "registry-bqsz2" [28fe6c05-7f77-44d8-a3d2-839555a750a4] Running
	* I0310 00:44:30.844698 1085855 system_pods.go:87] "registry-proxy-jgjdk" [450bf9c8-1e1d-41f4-a105-2836463b7ccd] Running
	* I0310 00:44:30.844706 1085855 system_pods.go:87] "snapshot-controller-66df655854-bl5n2" [45796b79-e6e1-42c7-a9f6-ae4c18cfaf04] Running
	* I0310 00:44:30.844719 1085855 system_pods.go:87] "snapshot-controller-66df655854-shthb" [43951854-067b-47b1-8bbc-c5cd3a2a8f5d] Running
	* I0310 00:44:30.844731 1085855 system_pods.go:87] "storage-provisioner" [2306cc2b-4127-410f-b862-d75f5923ec76] Running
	* I0310 00:44:30.844742 1085855 system_pods.go:87] "tiller-deploy-7c86b7fbdf-k4qzh" [1c948044-7876-4077-a225-b7964c8e9b4e] Running
	* I0310 00:44:30.844755 1085855 system_pods.go:124] duration metric: took 9.536816ms to wait for k8s-apps to be running ...
	* I0310 00:44:30.844770 1085855 system_svc.go:44] waiting for kubelet service to be running ....
	* I0310 00:44:30.844839 1085855 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	* I0310 00:44:30.857627 1085855 system_svc.go:56] duration metric: took 12.849939ms WaitForService to wait for kubelet.
	* I0310 00:44:30.857652 1085855 node_ready.go:35] waiting 6m0s for node status to be ready ...
	* I0310 00:44:30.861781 1085855 node_ready.go:38] duration metric: took 4.118747ms to wait for WaitForNodeReady...
	* I0310 00:44:30.861810 1085855 kubeadm.go:541] duration metric: took 1m37.293581315s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	* I0310 00:44:30.861836 1085855 node_conditions.go:101] verifying NodePressure condition ...
	* I0310 00:44:30.864347 1085855 node_conditions.go:121] node storage ephemeral capacity is 309568300Ki
	* I0310 00:44:30.864402 1085855 node_conditions.go:122] node cpu capacity is 8
	* I0310 00:44:30.864420 1085855 node_conditions.go:104] duration metric: took 2.577777ms to run NodePressure ...
	* I0310 00:44:30.864432 1085855 start.go:211] waiting for startup goroutines ...
	* I0310 00:44:30.924379 1085855 start.go:460] kubectl: 1.20.4, cluster: 1.20.2 (minor skew: 0)
	* I0310 00:44:30.927607 1085855 out.go:129] * Done! kubectl is now configured to use "addons-20210310004204-1084876" cluster and "default" namespace by default

-- /stdout --
** stderr ** 
	E0310 00:44:43.707593 1097970 out.go:335] unable to parse "* I0310 00:42:05.126536 1085855 cli_runner.go:115] Run: docker system info --format \"{{json .}}\"\n": template: * I0310 00:42:05.126536 1085855 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	:1: function "json" not defined - returning raw string.
	E0310 00:44:43.730563 1097970 out.go:335] unable to parse "* I0310 00:42:05.228389 1085855 cli_runner.go:115] Run: docker system info --format \"{{json .}}\"\n": template: * I0310 00:42:05.228389 1085855 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	:1: function "json" not defined - returning raw string.
	E0310 00:44:43.789195 1097970 out.go:340] unable to execute * I0310 00:42:06.327636 1085855 cli_runner.go:115] Run: docker network inspect addons-20210310004204-1084876 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	: template: * I0310 00:42:06.327636 1085855 cli_runner.go:115] Run: docker network inspect addons-20210310004204-1084876 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	:1:285: executing "* I0310 00:42:06.327636 1085855 cli_runner.go:115] Run: docker network inspect addons-20210310004204-1084876 --format \"{\"Name\": \"{{.Name}}\",\"Driver\": \"{{.Driver}}\",\"Subnet\": \"{{range .IPAM.Config}}{{.Subnet}}{{end}}\",\"Gateway\": \"{{range .IPAM.Config}}{{.Gateway}}{{end}}\",\"MTU\": {{if (index .Options \"com.docker.network.driver.mtu\")}}{{(index .Options \"com.docker.network.driver.mtu\")}}{{else}}0{{end}}, \"ContainerIPs\": [{{range $k,$v := .Containers }}\"{{$v.IPv4Address}}\",{{end}}]}\"\n" at <index .Options "com.docker.network.driver.mtu">: error calling index: index of untyped nil - returning raw string.
	E0310 00:44:43.794895 1097970 out.go:340] unable to execute * W0310 00:42:06.370252 1085855 cli_runner.go:162] docker network inspect addons-20210310004204-1084876 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	: template: * W0310 00:42:06.370252 1085855 cli_runner.go:162] docker network inspect addons-20210310004204-1084876 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	:1:280: executing "* W0310 00:42:06.370252 1085855 cli_runner.go:162] docker network inspect addons-20210310004204-1084876 --format \"{\"Name\": \"{{.Name}}\",\"Driver\": \"{{.Driver}}\",\"Subnet\": \"{{range .IPAM.Config}}{{.Subnet}}{{end}}\",\"Gateway\": \"{{range .IPAM.Config}}{{.Gateway}}{{end}}\",\"MTU\": {{if (index .Options \"com.docker.network.driver.mtu\")}}{{(index .Options \"com.docker.network.driver.mtu\")}}{{else}}0{{end}}, \"ContainerIPs\": [{{range $k,$v := .Containers }}\"{{$v.IPv4Address}}\",{{end}}]}\" returned with exit code 1\n" at <index .Options "com.docker.network.driver.mtu">: error calling index: index of untyped nil - returning raw string.
	E0310 00:44:43.833783 1097970 out.go:340] unable to execute * I0310 00:42:06.414668 1085855 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	: template: * I0310 00:42:06.414668 1085855 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	:1:262: executing "* I0310 00:42:06.414668 1085855 cli_runner.go:115] Run: docker network inspect bridge --format \"{\"Name\": \"{{.Name}}\",\"Driver\": \"{{.Driver}}\",\"Subnet\": \"{{range .IPAM.Config}}{{.Subnet}}{{end}}\",\"Gateway\": \"{{range .IPAM.Config}}{{.Gateway}}{{end}}\",\"MTU\": {{if (index .Options \"com.docker.network.driver.mtu\")}}{{(index .Options \"com.docker.network.driver.mtu\")}}{{else}}0{{end}}, \"ContainerIPs\": [{{range $k,$v := .Containers }}\"{{$v.IPv4Address}}\",{{end}}]}\"\n" at <index .Options "com.docker.network.driver.mtu">: error calling index: index of untyped nil - returning raw string.
	E0310 00:44:43.865467 1097970 out.go:335] unable to parse "* I0310 00:42:07.445475 1085855 cli_runner.go:115] Run: docker info --format \"'{{json .SecurityOptions}}'\"\n": template: * I0310 00:42:07.445475 1085855 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	:1: function "json" not defined - returning raw string.
	E0310 00:44:43.904175 1097970 out.go:340] unable to execute * I0310 00:42:12.168565 1085855 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210310004204-1084876
	: template: * I0310 00:42:12.168565 1085855 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210310004204-1084876
	:1:96: executing "* I0310 00:42:12.168565 1085855 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" addons-20210310004204-1084876\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 00:44:43.911463 1097970 out.go:335] unable to parse "* I0310 00:42:12.213443 1085855 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x7fc080] 0x7fc040 <nil>  [] 0s} 127.0.0.1 33482 <nil> <nil>}\n": template: * I0310 00:42:12.213443 1085855 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x7fc080] 0x7fc040 <nil>  [] 0s} 127.0.0.1 33482 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 00:44:43.925254 1097970 out.go:340] unable to execute * I0310 00:42:12.340113 1085855 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210310004204-1084876
	: template: * I0310 00:42:12.340113 1085855 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210310004204-1084876
	:1:96: executing "* I0310 00:42:12.340113 1085855 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" addons-20210310004204-1084876\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 00:44:43.932579 1097970 out.go:335] unable to parse "* I0310 00:42:12.384926 1085855 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x7fc080] 0x7fc040 <nil>  [] 0s} 127.0.0.1 33482 <nil> <nil>}\n": template: * I0310 00:42:12.384926 1085855 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x7fc080] 0x7fc040 <nil>  [] 0s} 127.0.0.1 33482 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 00:44:43.979543 1097970 out.go:340] unable to execute * I0310 00:42:12.935588 1085855 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210310004204-1084876
	: template: * I0310 00:42:12.935588 1085855 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210310004204-1084876
	:1:96: executing "* I0310 00:42:12.935588 1085855 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" addons-20210310004204-1084876\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 00:44:43.996826 1097970 out.go:340] unable to execute * I0310 00:42:13.149364 1085855 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210310004204-1084876
	: template: * I0310 00:42:13.149364 1085855 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210310004204-1084876
	:1:96: executing "* I0310 00:42:13.149364 1085855 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" addons-20210310004204-1084876\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 00:44:44.004690 1097970 out.go:335] unable to parse "* I0310 00:42:13.194907 1085855 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x7fc080] 0x7fc040 <nil>  [] 0s} 127.0.0.1 33482 <nil> <nil>}\n": template: * I0310 00:42:13.194907 1085855 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x7fc080] 0x7fc040 <nil>  [] 0s} 127.0.0.1 33482 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 00:44:44.021581 1097970 out.go:340] unable to execute * I0310 00:42:13.310600 1085855 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210310004204-1084876
	: template: * I0310 00:42:13.310600 1085855 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210310004204-1084876
	:1:96: executing "* I0310 00:42:13.310600 1085855 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" addons-20210310004204-1084876\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 00:44:44.028381 1097970 out.go:335] unable to parse "* I0310 00:42:13.355853 1085855 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x7fc080] 0x7fc040 <nil>  [] 0s} 127.0.0.1 33482 <nil> <nil>}\n": template: * I0310 00:42:13.355853 1085855 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x7fc080] 0x7fc040 <nil>  [] 0s} 127.0.0.1 33482 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 00:44:44.243578 1097970 out.go:340] unable to execute * I0310 00:42:13.480216 1085855 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210310004204-1084876
	: template: * I0310 00:42:13.480216 1085855 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210310004204-1084876
	:1:96: executing "* I0310 00:42:13.480216 1085855 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" addons-20210310004204-1084876\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 00:44:44.251806 1097970 out.go:335] unable to parse "* I0310 00:42:13.525007 1085855 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x7fc080] 0x7fc040 <nil>  [] 0s} 127.0.0.1 33482 <nil> <nil>}\n": template: * I0310 00:42:13.525007 1085855 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x7fc080] 0x7fc040 <nil>  [] 0s} 127.0.0.1 33482 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 00:44:44.437622 1097970 out.go:340] unable to execute * I0310 00:42:14.243884 1085855 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210310004204-1084876
	: template: * I0310 00:42:14.243884 1085855 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210310004204-1084876
	:1:96: executing "* I0310 00:42:14.243884 1085855 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" addons-20210310004204-1084876\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 00:44:44.468787 1097970 out.go:340] unable to execute * I0310 00:42:14.422143 1085855 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210310004204-1084876
	: template: * I0310 00:42:14.422143 1085855 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210310004204-1084876
	:1:96: executing "* I0310 00:42:14.422143 1085855 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" addons-20210310004204-1084876\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 00:44:44.487944 1097970 out.go:340] unable to execute * I0310 00:42:14.595826 1085855 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210310004204-1084876
	: template: * I0310 00:42:14.595826 1085855 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210310004204-1084876
	:1:96: executing "* I0310 00:42:14.595826 1085855 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" addons-20210310004204-1084876\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 00:44:44.493812 1097970 out.go:340] unable to execute * I0310 00:42:14.595877 1085855 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210310004204-1084876
	: template: * I0310 00:42:14.595877 1085855 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210310004204-1084876
	:1:96: executing "* I0310 00:42:14.595877 1085855 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" addons-20210310004204-1084876\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 00:44:44.529308 1097970 out.go:340] unable to execute * I0310 00:42:14.963015 1085855 cli_runner.go:115] Run: docker network inspect addons-20210310004204-1084876 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	: template: * I0310 00:42:14.963015 1085855 cli_runner.go:115] Run: docker network inspect addons-20210310004204-1084876 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	:1:285: executing "* I0310 00:42:14.963015 1085855 cli_runner.go:115] Run: docker network inspect addons-20210310004204-1084876 --format \"{\"Name\": \"{{.Name}}\",\"Driver\": \"{{.Driver}}\",\"Subnet\": \"{{range .IPAM.Config}}{{.Subnet}}{{end}}\",\"Gateway\": \"{{range .IPAM.Config}}{{.Gateway}}{{end}}\",\"MTU\": {{if (index .Options \"com.docker.network.driver.mtu\")}}{{(index .Options \"com.docker.network.driver.mtu\")}}{{else}}0{{end}}, \"ContainerIPs\": [{{range $k,$v := .Containers }}\"{{$v.IPv4Address}}\",{{end}}]}\"\n" at <index .Options "com.docker.network.driver.mtu">: error calling index: index of untyped nil - returning raw string.
	E0310 00:44:45.145470 1097970 out.go:340] unable to execute * I0310 00:42:53.671054 1085855 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210310004204-1084876
	: template: * I0310 00:42:53.671054 1085855 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210310004204-1084876
	:1:96: executing "* I0310 00:42:53.671054 1085855 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" addons-20210310004204-1084876\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 00:44:45.157929 1097970 out.go:340] unable to execute * I0310 00:42:53.687634 1085855 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210310004204-1084876
	: template: * I0310 00:42:53.687634 1085855 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210310004204-1084876
	:1:96: executing "* I0310 00:42:53.687634 1085855 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" addons-20210310004204-1084876\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 00:44:45.170522 1097970 out.go:340] unable to execute * I0310 00:42:53.692233 1085855 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210310004204-1084876
	: template: * I0310 00:42:53.692233 1085855 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210310004204-1084876
	:1:96: executing "* I0310 00:42:53.692233 1085855 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" addons-20210310004204-1084876\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 00:44:45.182558 1097970 out.go:340] unable to execute * I0310 00:42:53.698082 1085855 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210310004204-1084876
	: template: * I0310 00:42:53.698082 1085855 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210310004204-1084876
	:1:96: executing "* I0310 00:42:53.698082 1085855 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" addons-20210310004204-1084876\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 00:44:45.221768 1097970 out.go:340] unable to execute * I0310 00:42:53.720935 1085855 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210310004204-1084876
	: template: * I0310 00:42:53.720935 1085855 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210310004204-1084876
	:1:96: executing "* I0310 00:42:53.720935 1085855 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" addons-20210310004204-1084876\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 00:44:45.239627 1097970 out.go:340] unable to execute * I0310 00:42:53.728155 1085855 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210310004204-1084876
	: template: * I0310 00:42:53.728155 1085855 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210310004204-1084876
	:1:96: executing "* I0310 00:42:53.728155 1085855 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" addons-20210310004204-1084876\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 00:44:45.251452 1097970 out.go:340] unable to execute * I0310 00:42:53.730523 1085855 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210310004204-1084876
	: template: * I0310 00:42:53.730523 1085855 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210310004204-1084876
	:1:96: executing "* I0310 00:42:53.730523 1085855 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" addons-20210310004204-1084876\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 00:44:45.263442 1097970 out.go:340] unable to execute * I0310 00:42:53.774957 1085855 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210310004204-1084876
	: template: * I0310 00:42:53.774957 1085855 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210310004204-1084876
	:1:96: executing "* I0310 00:42:53.774957 1085855 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" addons-20210310004204-1084876\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 00:44:45.274112 1097970 out.go:340] unable to execute * I0310 00:42:53.788160 1085855 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210310004204-1084876
	: template: * I0310 00:42:53.788160 1085855 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210310004204-1084876
	:1:96: executing "* I0310 00:42:53.788160 1085855 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" addons-20210310004204-1084876\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.

** /stderr **
helpers_test.go:250: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-20210310004204-1084876 -n addons-20210310004204-1084876
helpers_test.go:257: (dbg) Run:  kubectl --context addons-20210310004204-1084876 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:263: non-running pods: nginx task-pv-pod ingress-nginx-admission-create-tw5ql ingress-nginx-admission-patch-fswz4
helpers_test.go:265: ======> post-mortem[TestAddons/parallel/GCPAuth]: describe non-running pods <======
helpers_test.go:268: (dbg) Run:  kubectl --context addons-20210310004204-1084876 describe pod nginx task-pv-pod ingress-nginx-admission-create-tw5ql ingress-nginx-admission-patch-fswz4
helpers_test.go:268: (dbg) Non-zero exit: kubectl --context addons-20210310004204-1084876 describe pod nginx task-pv-pod ingress-nginx-admission-create-tw5ql ingress-nginx-admission-patch-fswz4: exit status 1 (100.661351ms)

-- stdout --
	Name:         nginx
	Namespace:    default
	Priority:     0
	Node:         addons-20210310004204-1084876/192.168.49.205
	Start Time:   Wed, 10 Mar 2021 00:44:45 +0000
	Labels:       run=nginx
	Annotations:  <none>
	Status:       Pending
	IP:           
	IPs:          <none>
	Containers:
	  nginx:
	    Container ID:   
	    Image:          nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from default-token-sdsp9 (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  default-token-sdsp9:
	    Type:        Secret (a volume populated by a Secret)
	    SecretName:  default-token-sdsp9
	    Optional:    false
	QoS Class:       BestEffort
	Node-Selectors:  <none>
	Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/nginx to addons-20210310004204-1084876
	  Normal  Pulling    1s    kubelet            Pulling image "nginx:alpine"
	
	
	Name:         task-pv-pod
	Namespace:    default
	Priority:     0
	Node:         addons-20210310004204-1084876/192.168.49.205
	Start Time:   Wed, 10 Mar 2021 00:44:41 +0000
	Labels:       app=task-pv-pod
	Annotations:  <none>
	Status:       Pending
	IP:           
	IPs:          <none>
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from default-token-sdsp9 (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  default-token-sdsp9:
	    Type:        Secret (a volume populated by a Secret)
	    SecretName:  default-token-sdsp9
	    Optional:    false
	QoS Class:       BestEffort
	Node-Selectors:  <none>
	Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason                  Age   From                     Message
	  ----    ------                  ----  ----                     -------
	  Normal  Scheduled               6s    default-scheduler        Successfully assigned default/task-pv-pod to addons-20210310004204-1084876
	  Normal  SuccessfulAttachVolume  6s    attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-b5b99d04-5b8d-40b6-9d4a-6c4815dcf8c9"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-tw5ql" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-fswz4" not found

** /stderr **
helpers_test.go:270: kubectl --context addons-20210310004204-1084876 describe pod nginx task-pv-pod ingress-nginx-admission-create-tw5ql ingress-nginx-admission-patch-fswz4: exit status 1
--- FAIL: TestAddons/parallel/GCPAuth (16.40s)

TestErrorSpam (74.48s)

=== RUN   TestErrorSpam
=== PAUSE TestErrorSpam


=== CONT  TestErrorSpam
error_spam_test.go:64: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210310011543-1084876 -n=1 --memory=2250 --wait=false --driver=docker  --container-runtime=docker

=== CONT  TestErrorSpam
error_spam_test.go:64: (dbg) Done: out/minikube-linux-amd64 start -p nospam-20210310011543-1084876 -n=1 --memory=2250 --wait=false --driver=docker  --container-runtime=docker: (1m7.123514865s)
error_spam_test.go:74: acceptable stderr: "! Your cgroup does not allow setting memory."
error_spam_test.go:79: unexpected stderr: "! Unable to create dedicated network, this might result in cluster IP change after restart: failed to create network after 20 attempts"
error_spam_test.go:79: unexpected stderr: "! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:"
error_spam_test.go:79: unexpected stderr: "command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }"
error_spam_test.go:79: unexpected stderr: "err     : Process exited with status 1"
error_spam_test.go:79: unexpected stderr: "output  : --- /lib/systemd/system/docker.service\t2021-01-29 14:31:32.000000000 +0000"
error_spam_test.go:79: unexpected stderr: "+++ /lib/systemd/system/docker.service.new\t2021-03-10 01:16:00.403163612 +0000"
error_spam_test.go:79: unexpected stderr: "@@ -1,30 +1,32 @@"
error_spam_test.go:79: unexpected stderr: " [Unit]"
error_spam_test.go:79: unexpected stderr: " Description=Docker Application Container Engine"
error_spam_test.go:79: unexpected stderr: " Documentation=https://docs.docker.com"
error_spam_test.go:79: unexpected stderr: "+BindsTo=containerd.service"
error_spam_test.go:79: unexpected stderr: " After=network-online.target firewalld.service containerd.service"
error_spam_test.go:79: unexpected stderr: " Wants=network-online.target"
error_spam_test.go:79: unexpected stderr: "-Requires=docker.socket containerd.service"
error_spam_test.go:79: unexpected stderr: "+Requires=docker.socket"
error_spam_test.go:79: unexpected stderr: "+StartLimitBurst=3"
error_spam_test.go:79: unexpected stderr: "+StartLimitIntervalSec=60"
error_spam_test.go:79: unexpected stderr: " [Service]"
error_spam_test.go:79: unexpected stderr: " Type=notify"
error_spam_test.go:79: unexpected stderr: "-# the default is not to use systemd for cgroups because the delegate issues still"
error_spam_test.go:79: unexpected stderr: "-# exists and systemd currently does not support the cgroup feature set required"
error_spam_test.go:79: unexpected stderr: "-# for containers run by docker"
error_spam_test.go:79: unexpected stderr: "-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock"
error_spam_test.go:79: unexpected stderr: "-ExecReload=/bin/kill -s HUP $MAINPID"
error_spam_test.go:79: unexpected stderr: "-TimeoutSec=0"
error_spam_test.go:79: unexpected stderr: "-RestartSec=2"
error_spam_test.go:79: unexpected stderr: "-Restart=always"
error_spam_test.go:79: unexpected stderr: "-"
error_spam_test.go:79: unexpected stderr: "-# Note that StartLimit* options were moved from \"Service\" to \"Unit\" in systemd 229."
error_spam_test.go:79: unexpected stderr: "-# Both the old, and new location are accepted by systemd 229 and up, so using the old location"
error_spam_test.go:79: unexpected stderr: "-# to make them work for either version of systemd."
error_spam_test.go:79: unexpected stderr: "-StartLimitBurst=3"
error_spam_test.go:79: unexpected stderr: "+Restart=on-failure"
error_spam_test.go:79: unexpected stderr: "-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230."
error_spam_test.go:79: unexpected stderr: "-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make"
error_spam_test.go:79: unexpected stderr: "-# this option work for either version of systemd."
error_spam_test.go:79: unexpected stderr: "-StartLimitInterval=60s"
error_spam_test.go:79: unexpected stderr: "+"
error_spam_test.go:79: unexpected stderr: "+"
error_spam_test.go:79: unexpected stderr: "+# This file is a systemd drop-in unit that inherits from the base dockerd configuration."
error_spam_test.go:79: unexpected stderr: "+# The base configuration already specifies an 'ExecStart=...' command. The first directive"
error_spam_test.go:79: unexpected stderr: "+# here is to clear out that command inherited from the base configuration. Without this,"
error_spam_test.go:79: unexpected stderr: "+# the command from the base configuration and the command specified here are treated as"
error_spam_test.go:79: unexpected stderr: "+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd"
error_spam_test.go:79: unexpected stderr: "+# will catch this invalid input and refuse to start the service with an error like:"
error_spam_test.go:79: unexpected stderr: "+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services."
error_spam_test.go:79: unexpected stderr: "+"
error_spam_test.go:79: unexpected stderr: "+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other"
error_spam_test.go:79: unexpected stderr: "+# container runtimes. If left unlimited, it may result in OOM issues with MySQL."
error_spam_test.go:79: unexpected stderr: "+ExecStart="
error_spam_test.go:79: unexpected stderr: "+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 "
error_spam_test.go:79: unexpected stderr: "+ExecReload=/bin/kill -s HUP $MAINPID"
error_spam_test.go:79: unexpected stderr: " # Having non-zero Limit*s causes performance problems due to accounting overhead"
error_spam_test.go:79: unexpected stderr: " # in the kernel. We recommend using cgroups to do container-local accounting."
error_spam_test.go:79: unexpected stderr: "@@ -32,16 +34,16 @@"
error_spam_test.go:79: unexpected stderr: " LimitNPROC=infinity"
error_spam_test.go:79: unexpected stderr: " LimitCORE=infinity"
error_spam_test.go:79: unexpected stderr: "-# Comment TasksMax if your systemd version does not support it."
error_spam_test.go:79: unexpected stderr: "-# Only systemd 226 and above support this option."
error_spam_test.go:79: unexpected stderr: "+# Uncomment TasksMax if your systemd version supports it."
error_spam_test.go:79: unexpected stderr: "+# Only systemd 226 and above support this version."
error_spam_test.go:79: unexpected stderr: " TasksMax=infinity"
error_spam_test.go:79: unexpected stderr: "+TimeoutStartSec=0"
error_spam_test.go:79: unexpected stderr: " # set delegate yes so that systemd does not reset the cgroups of docker containers"
error_spam_test.go:79: unexpected stderr: " Delegate=yes"
error_spam_test.go:79: unexpected stderr: " # kill only the docker process, not all processes in the cgroup"
error_spam_test.go:79: unexpected stderr: " KillMode=process"
error_spam_test.go:79: unexpected stderr: "-OOMScoreAdjust=-500"
error_spam_test.go:79: unexpected stderr: " [Install]"
error_spam_test.go:79: unexpected stderr: " WantedBy=multi-user.target"
error_spam_test.go:79: unexpected stderr: "Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install."
error_spam_test.go:79: unexpected stderr: "Executing: /lib/systemd/systemd-sysv-install enable docker"
error_spam_test.go:79: unexpected stderr: "Job for docker.service failed because the control process exited with error code."
error_spam_test.go:79: unexpected stderr: "See \"systemctl status docker.service\" and \"journalctl -xe\" for details."
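The ExecStart hunks flagged above hinge on a systemd rule that the generated unit's own comments allude to: for services that are not Type=oneshot, declaring a second ExecStart= is an error unless the inherited value is first cleared with an empty assignment. A minimal illustrative drop-in (hypothetical path and dockerd flags, not minikube's actual generated unit):

```ini
# /etc/systemd/system/docker.service.d/override.conf  (hypothetical drop-in)
[Service]
# The empty assignment clears the ExecStart inherited from the base unit.
# Without it, systemd refuses to start the service with:
#   "Service has more than one ExecStart= setting, which is only allowed
#    for Type=oneshot services."
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
```

After editing a drop-in like this, `systemctl daemon-reload` is required before the new command takes effect.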
error_spam_test.go:93: minikube stdout:
* [nospam-20210310011543-1084876] minikube v1.18.1 on Debian 9.13 (kvm/amd64)
- KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/kubeconfig
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube
- MINIKUBE_LOCATION=10730
* Using the docker driver based on user configuration
- More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
* Starting control plane node nospam-20210310011543-1084876 in cluster nospam-20210310011543-1084876
* Creating docker container (CPUs=2, Memory=2250MB) ...
* Stopping node "nospam-20210310011543-1084876"  ...
* Powering off "nospam-20210310011543-1084876" via SSH ...
* Deleting "nospam-20210310011543-1084876" in docker ...
* Creating docker container (CPUs=2, Memory=2250MB) ...
* Preparing Kubernetes v1.20.2 on Docker 20.10.3 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v4
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-20210310011543-1084876" cluster and "default" namespace by default
error_spam_test.go:94: minikube stderr:
! Your cgroup does not allow setting memory.
! Unable to create dedicated network, this might result in cluster IP change after restart: failed to create network after 20 attempts
! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
err     : Process exited with status 1
output  : --- /lib/systemd/system/docker.service	2021-01-29 14:31:32.000000000 +0000
+++ /lib/systemd/system/docker.service.new	2021-03-10 01:16:00.403163612 +0000
@@ -1,30 +1,32 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
+BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60

[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure

-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
+ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
LimitNPROC=infinity
LimitCORE=infinity

-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500

[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xe" for details.

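The provisioning command that failed above uses a single "update only if changed" shell idiom: diff the live unit file against the staged one, and only when they differ (diff exits non-zero) move the staged file into place and restart the service. A self-contained sketch of the same idiom, using hypothetical temp files in place of /lib/systemd/system/docker.service and docker.service.new:

```shell
#!/bin/sh
# Sketch of the diff-or-replace idiom from the failing command above.
# current.conf / staged.conf are hypothetical stand-ins for the real unit files.
set -eu

printf 'old\n' > current.conf
printf 'new\n' > staged.conf

# `diff -u` exits non-zero when the files differ, so the { ... } block
# runs only when an update is actually needed.
diff -u current.conf staged.conf || {
  mv staged.conf current.conf
  echo "config updated"
}

cat current.conf  # now contains the staged content
```

In the real command the block also runs `systemctl daemon-reload` and `systemctl restart docker`; the test failure occurred because that restart exited non-zero, not because the diff/move idiom itself is wrong.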
error_spam_test.go:107: *** TestErrorSpam FAILED at 2021-03-10 01:16:50.629215659 +0000 UTC m=+2120.728750976
helpers_test.go:218: -----------------------post-mortem--------------------------------
helpers_test.go:226: ======>  post-mortem[TestErrorSpam]: docker inspect <======
helpers_test.go:227: (dbg) Run:  docker inspect nospam-20210310011543-1084876
helpers_test.go:231: (dbg) docker inspect nospam-20210310011543-1084876:

-- stdout --
	[
	    {
	        "Id": "787495e3cdb3a26b2e2c060e8a80ce83ec3c687b0d35eb582f57e38660766504",
	        "Created": "2021-03-10T01:16:16.83060163Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1239400,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-03-10T01:16:17.718874045Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a776c544501ab7f8d55c0f9d8df39bc284df5e744ef1ab4fa59bbd753c98d5f6",
	        "ResolvConfPath": "/var/lib/docker/containers/787495e3cdb3a26b2e2c060e8a80ce83ec3c687b0d35eb582f57e38660766504/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/787495e3cdb3a26b2e2c060e8a80ce83ec3c687b0d35eb582f57e38660766504/hostname",
	        "HostsPath": "/var/lib/docker/containers/787495e3cdb3a26b2e2c060e8a80ce83ec3c687b0d35eb582f57e38660766504/hosts",
	        "LogPath": "/var/lib/docker/containers/787495e3cdb3a26b2e2c060e8a80ce83ec3c687b0d35eb582f57e38660766504/787495e3cdb3a26b2e2c060e8a80ce83ec3c687b0d35eb582f57e38660766504-json.log",
	        "Name": "/nospam-20210310011543-1084876",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "nospam-20210310011543-1084876:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "nospam-20210310011543-1084876",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/3c4e14d8bcd3a6dd44377d6c2abf20d2d5505daf5dd1991d8e0c4c8edd6a06aa-init/diff:/var/lib/docker/overlay2/28b47fe487a6db3251353ede3b6f69e6964a6f2abeebaa30c0ad1d1e78d6d00a/diff:/var/lib/docker/overlay2/29f807c33e13b428dfb88e0079cb48053d52bc476ea5072dee137978cb12d04a/diff:/var/lib/docker/overlay2/dac80af91649d325e8284c69485d6ff878d5853f575704daf3e34a558a4dda4f/diff:/var/lib/docker/overlay2/df0ce8e6141fb84ed6e57b1c2688a69b8eb9a17fd5ee143949e1fdaed4e2127b/diff:/var/lib/docker/overlay2/aeddbcd65ba884bcaec5c4d9ed1d4d7786ab18c2b63df71ad64387fe05e81f1d/diff:/var/lib/docker/overlay2/7b1d7e6c08ca72dcd115aafabcf39fc5bf8c7ebfef24cea5afd72de3e3aaef74/diff:/var/lib/docker/overlay2/e172241d5c67cd99e30286314f9a7e0bfdbe98e533ace6c30b573c8e7016a37c/diff:/var/lib/docker/overlay2/b92bddb174c1c73ced52b390e387b906f0333b7864874fd7b14b4a81995084e2/diff:/var/lib/docker/overlay2/592238ad80762d7c7fad92dccc0dca54b900e705d75280eab248a5cd75f9e0c9/diff:/var/lib/docker/overlay2/3703a19c7e2d92b4b1aa0e6ed88a22e60ae5e4734d51a6ee4120ffb3fd44cedd/diff:/var/lib/docker/overlay2/026c3575d0e91a7ca6ffeac4648df1b4810fd709c6e2cca8baaa56f1240d373a/diff:/var/lib/docker/overlay2/26f9dc404e831d46f04fc64d90165fcb6cf2b626f20d5c6f3c4d192330974443/diff:/var/lib/docker/overlay2/1d4aa7eb8e0fd341ce63a7e0ca03271806a93d7b3ff5f68421a54114f7db7920/diff:/var/lib/docker/overlay2/262ecf385929e321ea03edb42a15ed2009ddac8fe3e6370e83fbb48c9cf2a5a8/diff:/var/lib/docker/overlay2/437e5fda1fb7c52e890750e7d99942571a65211a4d0aeca3e47a312c037ce50c/diff:/var/lib/docker/overlay2/c49137c10ad9355ca71ee15d51fa243c0c5677d7cfc5be7e91e3b6a41f147a44/diff:/var/lib/docker/overlay2/2df3c6c6f614eb15d222c1928d20367e93571cdcc98fce5703c321bbc9e89ada/diff:/var/lib/docker/overlay2/4223138719a89216f8b18bd8209459f6d9da0eef8e14f421b9ac14497e6303fe/diff:/var/lib/docker/overlay2/8c322e276775bec279ce519ab64fdc5d72374dd59f193b4e1f1c64b169dbe95c/diff:/var/lib/docker/overlay2/8835de952c31ba4fb601f762e9fe01ff4f63c9a70cd4cbb66aa33f53f0b6ec65/diff:/var/lib/docker/overlay2/e4d38c30d6aa80c930dc3bfc34876ed425d6f4e5cfa9a2bcb9c79003aaea69ce/diff:/var/lib/docker/overlay2/8b23f70785fc8c9ad799398a641dacde1831c1e8b8902353d8de6fe2df541e91/diff:/var/lib/docker/overlay2/b85b76d6ce7303e7b59902f25ce2b403c9ae01301bbdb51f3c9987b54aa8fab2/diff:/var/lib/docker/overlay2/70e3155bfae885c5e656de33a3952490499dc2d41b3f86d8220b493291996885/diff:/var/lib/docker/overlay2/9b5cbb5d27c2d34162d8b38e5d6585f627dec3775c3017d6ad087f013c951f9d/diff:/var/lib/docker/overlay2/2110e17f930b05e90bca2794e63a1f7910bf640c30fd026509744e74ef97d506/diff:/var/lib/docker/overlay2/790948bf453d8ae59a2cb4892b21787a30df4f980a6e4bf63c5db18db81815ba/diff:/var/lib/docker/overlay2/452aaf1cb28ef124364e99e3ace726f6719100c165f921ee8507f82eb3652e32/diff:/var/lib/docker/overlay2/8502aab369c9748ff5d36f81e9a59c609d5f07f621ee7e01d1f9bd9714381ec6/diff:/var/lib/docker/overlay2/c40c9698a31968efd949a25e8ac993fc7e6185124270c526860ffdbe13a7c356/diff:/var/lib/docker/overlay2/fd2db339b2f787338c29e87998a27397b2d1b6616f3ca8deeacef9be144d6616/diff:/var/lib/docker/overlay2/57ab026e96dbcabc281c2c582254053bae73a4c69d2eca845047871bd5406288/diff:/var/lib/docker/overlay2/ff280a1ef7fc06c9015daf55cc9e56d3d0818daf0bd8f3d767415cb4681d40cd/diff:/var/lib/docker/overlay2/0409d6000dc4a61c33927b65be0ef24aab292a9b8c7f1156dd59952031ec958a/diff:/var/lib/docker/overlay2/2bb7adee4012b2bbda639d5e5169236c33e1f38bf28e8475d73a21340e2073c4/diff:/var/lib/docker/overlay2/26aa983703a7aa2bc7b698b7fc9efd858ecd26ec7dd93e9d89a75272e577fa9f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3c4e14d8bcd3a6dd44377d6c2abf20d2d5505daf5dd1991d8e0c4c8edd6a06aa/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3c4e14d8bcd3a6dd44377d6c2abf20d2d5505daf5dd1991d8e0c4c8edd6a06aa/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3c4e14d8bcd3a6dd44377d6c2abf20d2d5505daf5dd1991d8e0c4c8edd6a06aa/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "nospam-20210310011543-1084876",
	                "Source": "/var/lib/docker/volumes/nospam-20210310011543-1084876/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "nospam-20210310011543-1084876",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "nospam-20210310011543-1084876",
	                "name.minikube.sigs.k8s.io": "nospam-20210310011543-1084876",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1d15f92d1a1c07ef69db7199df7ddcfc0ca0288ae0000a35e9ac91b94e229eb7",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33587"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33586"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33583"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33585"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33584"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1d15f92d1a1c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "nospam-20210310011543-1084876": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.59.205"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "787495e3cdb3"
	                    ],
	                    "NetworkID": "437bb43d80acb55bc730111db1382b5494a5602101ff1c1a4dcdf7799e5dc028",
	                    "EndpointID": "2872ea80ae1413423ca1a855b8e4febcb91c1b61b9ecb16de31b1de008851305",
	                    "Gateway": "192.168.59.1",
	                    "IPAddress": "192.168.59.205",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3b:cd",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p nospam-20210310011543-1084876 -n nospam-20210310011543-1084876
helpers_test.go:240: <<< TestErrorSpam FAILED: start of post-mortem logs <<<
helpers_test.go:241: ======>  post-mortem[TestErrorSpam]: minikube logs <======
helpers_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210310011543-1084876 logs -n 25

=== CONT  TestErrorSpam
helpers_test.go:243: (dbg) Done: out/minikube-linux-amd64 -p nospam-20210310011543-1084876 logs -n 25: (2.571235423s)
helpers_test.go:248: TestErrorSpam logs: 
-- stdout --
	* ==> Docker <==
	* -- Logs begin at Wed 2021-03-10 01:16:18 UTC, end at Wed 2021-03-10 01:16:51 UTC. --
	* Mar 10 01:16:24 nospam-20210310011543-1084876 dockerd[202]: time="2021-03-10T01:16:24.874624413Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	* Mar 10 01:16:24 nospam-20210310011543-1084876 dockerd[202]: time="2021-03-10T01:16:24.875922371Z" level=info msg="Daemon shutdown complete"
	* Mar 10 01:16:24 nospam-20210310011543-1084876 systemd[1]: docker.service: Succeeded.
	* Mar 10 01:16:24 nospam-20210310011543-1084876 systemd[1]: Stopped Docker Application Container Engine.
	* Mar 10 01:16:24 nospam-20210310011543-1084876 systemd[1]: Starting Docker Application Container Engine...
	* Mar 10 01:16:24 nospam-20210310011543-1084876 dockerd[483]: time="2021-03-10T01:16:24.954937532Z" level=info msg="Starting up"
	* Mar 10 01:16:24 nospam-20210310011543-1084876 dockerd[483]: time="2021-03-10T01:16:24.957060220Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	* Mar 10 01:16:24 nospam-20210310011543-1084876 dockerd[483]: time="2021-03-10T01:16:24.957104174Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	* Mar 10 01:16:24 nospam-20210310011543-1084876 dockerd[483]: time="2021-03-10T01:16:24.957136757Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	* Mar 10 01:16:24 nospam-20210310011543-1084876 dockerd[483]: time="2021-03-10T01:16:24.957154079Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	* Mar 10 01:16:24 nospam-20210310011543-1084876 dockerd[483]: time="2021-03-10T01:16:24.958586458Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	* Mar 10 01:16:24 nospam-20210310011543-1084876 dockerd[483]: time="2021-03-10T01:16:24.958630352Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	* Mar 10 01:16:24 nospam-20210310011543-1084876 dockerd[483]: time="2021-03-10T01:16:24.958660658Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	* Mar 10 01:16:24 nospam-20210310011543-1084876 dockerd[483]: time="2021-03-10T01:16:24.958678783Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	* Mar 10 01:16:24 nospam-20210310011543-1084876 dockerd[483]: time="2021-03-10T01:16:24.993354077Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	* Mar 10 01:16:25 nospam-20210310011543-1084876 dockerd[483]: time="2021-03-10T01:16:25.002761728Z" level=warning msg="Your kernel does not support swap memory limit"
	* Mar 10 01:16:25 nospam-20210310011543-1084876 dockerd[483]: time="2021-03-10T01:16:25.002800033Z" level=warning msg="Your kernel does not support CPU realtime scheduler"
	* Mar 10 01:16:25 nospam-20210310011543-1084876 dockerd[483]: time="2021-03-10T01:16:25.003021639Z" level=info msg="Loading containers: start."
	* Mar 10 01:16:25 nospam-20210310011543-1084876 dockerd[483]: time="2021-03-10T01:16:25.110689963Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	* Mar 10 01:16:25 nospam-20210310011543-1084876 dockerd[483]: time="2021-03-10T01:16:25.153980407Z" level=info msg="Loading containers: done."
	* Mar 10 01:16:25 nospam-20210310011543-1084876 dockerd[483]: time="2021-03-10T01:16:25.181290895Z" level=info msg="Docker daemon" commit=46229ca graphdriver(s)=overlay2 version=20.10.3
	* Mar 10 01:16:25 nospam-20210310011543-1084876 dockerd[483]: time="2021-03-10T01:16:25.181362681Z" level=info msg="Daemon has completed initialization"
	* Mar 10 01:16:25 nospam-20210310011543-1084876 systemd[1]: Started Docker Application Container Engine.
	* Mar 10 01:16:25 nospam-20210310011543-1084876 dockerd[483]: time="2021-03-10T01:16:25.199194187Z" level=info msg="API listen on [::]:2376"
	* Mar 10 01:16:25 nospam-20210310011543-1084876 dockerd[483]: time="2021-03-10T01:16:25.204471383Z" level=info msg="API listen on /var/run/docker.sock"
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	* 94c928c56421d       a27166429d98e       14 seconds ago      Running             kube-controller-manager   0                   97ce5d844818d
	* 14e043f1fe08a       0369cf4303ffd       14 seconds ago      Running             etcd                      0                   1624a2861be93
	* c44289b8d50f6       ed2c44fbdd78b       14 seconds ago      Running             kube-scheduler            0                   7357275d2a000
	* e5c1c126a37b3       a8c2fdb8bf76e       14 seconds ago      Running             kube-apiserver            0                   e61477e2a3c32
	* 
	* ==> describe nodes <==
	* Name:               nospam-20210310011543-1084876
	* Roles:              control-plane,master
	* Labels:             beta.kubernetes.io/arch=amd64
	*                     beta.kubernetes.io/os=linux
	*                     kubernetes.io/arch=amd64
	*                     kubernetes.io/hostname=nospam-20210310011543-1084876
	*                     kubernetes.io/os=linux
	*                     minikube.k8s.io/commit=8d9e062aa56d18f701a92d5344bd63e9d7a0bc2e
	*                     minikube.k8s.io/name=nospam-20210310011543-1084876
	*                     minikube.k8s.io/updated_at=2021_03_10T01_16_48_0700
	*                     minikube.k8s.io/version=v1.18.1
	*                     node-role.kubernetes.io/control-plane=
	*                     node-role.kubernetes.io/master=
	* Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	*                     volumes.kubernetes.io/controller-managed-attach-detach: true
	* CreationTimestamp:  Wed, 10 Mar 2021 01:16:44 +0000
	* Taints:             node.kubernetes.io/not-ready:NoSchedule
	* Unschedulable:      false
	* Lease:
	*   HolderIdentity:  nospam-20210310011543-1084876
	*   AcquireTime:     <unset>
	*   RenewTime:       Wed, 10 Mar 2021 01:16:48 +0000
	* Conditions:
	*   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	*   ----             ------  -----------------                 ------------------                ------                       -------
	*   MemoryPressure   False   Wed, 10 Mar 2021 01:16:49 +0000   Wed, 10 Mar 2021 01:16:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	*   DiskPressure     False   Wed, 10 Mar 2021 01:16:49 +0000   Wed, 10 Mar 2021 01:16:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	*   PIDPressure      False   Wed, 10 Mar 2021 01:16:49 +0000   Wed, 10 Mar 2021 01:16:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	*   Ready            False   Wed, 10 Mar 2021 01:16:49 +0000   Wed, 10 Mar 2021 01:16:49 +0000   KubeletNotReady              container runtime status check may not have completed yet
	* Addresses:
	*   InternalIP:  192.168.59.205
	*   Hostname:    nospam-20210310011543-1084876
	* Capacity:
	*   cpu:                8
	*   ephemeral-storage:  309568300Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             30886996Ki
	*   pods:               110
	* Allocatable:
	*   cpu:                8
	*   ephemeral-storage:  309568300Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             30886996Ki
	*   pods:               110
	* System Info:
	*   Machine ID:                 84fb46bd39d2483a97ab4430ee4a5e3a
	*   System UUID:                1006e95c-542f-4632-9e0f-3d96cc8b8bea
	*   Boot ID:                    cfed3db4-db6c-4655-8abe-2e1ce08d21a8
	*   Kernel Version:             4.9.0-15-amd64
	*   OS Image:                   Ubuntu 20.04.1 LTS
	*   Operating System:           linux
	*   Architecture:               amd64
	*   Container Runtime Version:  docker://20.10.3
	*   Kubelet Version:            v1.20.2
	*   Kube-Proxy Version:         v1.20.2
	* Non-terminated Pods:          (4 in total)
	*   Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	*   ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	*   kube-system                 etcd-nospam-20210310011543-1084876                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         3s
	*   kube-system                 kube-apiserver-nospam-20210310011543-1084876             250m (3%)     0 (0%)      0 (0%)           0 (0%)         3s
	*   kube-system                 kube-controller-manager-nospam-20210310011543-1084876    200m (2%)     0 (0%)      0 (0%)           0 (0%)         3s
	*   kube-system                 kube-scheduler-nospam-20210310011543-1084876             100m (1%)     0 (0%)      0 (0%)           0 (0%)         3s
	* Allocated resources:
	*   (Total limits may be over 100 percent, i.e., overcommitted.)
	*   Resource           Requests    Limits
	*   --------           --------    ------
	*   cpu                650m (8%)   0 (0%)
	*   memory             100Mi (0%)  0 (0%)
	*   ephemeral-storage  100Mi (0%)  0 (0%)
	*   hugepages-1Gi      0 (0%)      0 (0%)
	*   hugepages-2Mi      0 (0%)      0 (0%)
	* Events:
	*   Type    Reason                   Age                From     Message
	*   ----    ------                   ----               ----     -------
	*   Normal  NodeHasSufficientMemory  16s (x5 over 16s)  kubelet  Node nospam-20210310011543-1084876 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    16s (x5 over 16s)  kubelet  Node nospam-20210310011543-1084876 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     16s (x4 over 16s)  kubelet  Node nospam-20210310011543-1084876 status is now: NodeHasSufficientPID
	*   Normal  Starting                 4s                 kubelet  Starting kubelet.
	*   Normal  NodeHasSufficientMemory  3s                 kubelet  Node nospam-20210310011543-1084876 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    3s                 kubelet  Node nospam-20210310011543-1084876 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     3s                 kubelet  Node nospam-20210310011543-1084876 status is now: NodeHasSufficientPID
	*   Normal  NodeNotReady             3s                 kubelet  Node nospam-20210310011543-1084876 status is now: NodeNotReady
	*   Normal  NodeAllocatableEnforced  3s                 kubelet  Updated Node Allocatable limit across pods
	* 
	* ==> dmesg <==
	* [  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-fdbaae534b92
	* [  +0.000001] ll header: 00000000: 02 42 7f a0 20 dc 02 42 c0 a8 31 cd 08 00        .B.. ..B..1...
	* [  +0.003965] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-fdbaae534b92
	* [  +0.000003] ll header: 00000000: 02 42 7f a0 20 dc 02 42 c0 a8 31 cd 08 00        .B.. ..B..1...
	* [  +8.187452] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-fdbaae534b92
	* [  +0.000002] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-fdbaae534b92
	* [  +0.000003] ll header: 00000000: 02 42 7f a0 20 dc 02 42 c0 a8 31 cd 08 00        .B.. ..B..1...
	* [  +0.000001] ll header: 00000000: 02 42 7f a0 20 dc 02 42 c0 a8 31 cd 08 00        .B.. ..B..1...
	* [  +0.000025] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-fdbaae534b92
	* [  +0.000003] ll header: 00000000: 02 42 7f a0 20 dc 02 42 c0 a8 31 cd 08 00        .B.. ..B..1...
	* [Mar10 01:07] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev veth8223c2c3
	* [  +0.000003] ll header: 00000000: ff ff ff ff ff ff 76 97 38 7d 39 ee 08 06        ......v.8}9...
	* [  +0.039142] IPv4: martian source 10.244.0.4 from 10.244.0.4, on dev veth9362de6d
	* [  +0.000003] ll header: 00000000: ff ff ff ff ff ff a6 80 e7 17 6f fb 08 06        ..........o...
	* [ +57.049439] cgroup: cgroup2: unknown option "nsdelegate"
	* [Mar10 01:08] cgroup: cgroup2: unknown option "nsdelegate"
	* [Mar10 01:11] cgroup: cgroup2: unknown option "nsdelegate"
	* [Mar10 01:13] cgroup: cgroup2: unknown option "nsdelegate"
	* [Mar10 01:14] cgroup: cgroup2: unknown option "nsdelegate"
	* [Mar10 01:15] cgroup: cgroup2: unknown option "nsdelegate"
	* [ +12.392357] cgroup: cgroup2: unknown option "nsdelegate"
	* [  +0.006993] cgroup: cgroup2: unknown option "nsdelegate"
	* [  +0.692155] cgroup: cgroup2: unknown option "nsdelegate"
	* [  +0.745408] cgroup: cgroup2: unknown option "nsdelegate"
	* [Mar10 01:16] cgroup: cgroup2: unknown option "nsdelegate"
	* 
	* ==> etcd [14e043f1fe08] <==
	* raft2021/03/10 01:16:37 INFO: newRaft 345d659c5db4c55e [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	* raft2021/03/10 01:16:37 INFO: 345d659c5db4c55e became follower at term 1
	* raft2021/03/10 01:16:37 INFO: 345d659c5db4c55e switched to configuration voters=(3773283785067775326)
	* 2021-03-10 01:16:37.882668 W | auth: simple token is not cryptographically signed
	* 2021-03-10 01:16:37.936043 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
	* 2021-03-10 01:16:37.939644 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	* 2021-03-10 01:16:37.939855 I | embed: listening for metrics on http://127.0.0.1:2381
	* 2021-03-10 01:16:37.940163 I | embed: listening for peers on 192.168.59.205:2380
	* 2021-03-10 01:16:37.940309 I | etcdserver: 345d659c5db4c55e as single-node; fast-forwarding 9 ticks (election ticks 10)
	* raft2021/03/10 01:16:37 INFO: 345d659c5db4c55e switched to configuration voters=(3773283785067775326)
	* 2021-03-10 01:16:37.940844 I | etcdserver/membership: added member 345d659c5db4c55e [https://192.168.59.205:2380] to cluster aaa01572b453e1d6
	* raft2021/03/10 01:16:38 INFO: 345d659c5db4c55e is starting a new election at term 1
	* raft2021/03/10 01:16:38 INFO: 345d659c5db4c55e became candidate at term 2
	* raft2021/03/10 01:16:38 INFO: 345d659c5db4c55e received MsgVoteResp from 345d659c5db4c55e at term 2
	* raft2021/03/10 01:16:38 INFO: 345d659c5db4c55e became leader at term 2
	* raft2021/03/10 01:16:38 INFO: raft.node: 345d659c5db4c55e elected leader 345d659c5db4c55e at term 2
	* 2021-03-10 01:16:38.180217 I | etcdserver: setting up the initial cluster version to 3.4
	* 2021-03-10 01:16:38.181191 N | etcdserver/membership: set the initial cluster version to 3.4
	* 2021-03-10 01:16:38.181268 I | etcdserver/api: enabled capabilities for version 3.4
	* 2021-03-10 01:16:38.181307 I | etcdserver: published {Name:nospam-20210310011543-1084876 ClientURLs:[https://192.168.59.205:2379]} to cluster aaa01572b453e1d6
	* 2021-03-10 01:16:38.181408 I | embed: ready to serve client requests
	* 2021-03-10 01:16:38.181507 I | embed: ready to serve client requests
	* 2021-03-10 01:16:38.183440 I | embed: serving client requests on 127.0.0.1:2379
	* 2021-03-10 01:16:38.183462 I | embed: serving client requests on 192.168.59.205:2379
	* 2021-03-10 01:16:44.950121 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/etcd-nospam-20210310011543-1084876\" " with result "range_response_count:0 size:4" took too long (100.714114ms) to execute
	* 
	* ==> kernel <==
	*  01:16:52 up  4:59,  0 users,  load average: 9.21, 3.97, 2.42
	* Linux nospam-20210310011543-1084876 4.9.0-15-amd64 #1 SMP Debian 4.9.258-1 (2021-03-08) x86_64 x86_64 x86_64 GNU/Linux
	* PRETTY_NAME="Ubuntu 20.04.1 LTS"
	* 
	* ==> kube-apiserver [e5c1c126a37b] <==
	* I0310 01:16:44.753647       1 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
	* I0310 01:16:44.753668       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	* I0310 01:16:44.753689       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	* I0310 01:16:44.832616       1 cache.go:39] Caches are synced for autoregister controller
	* I0310 01:16:44.833236       1 cache.go:39] Caches are synced for AvailableConditionController controller
	* I0310 01:16:44.836543       1 shared_informer.go:247] Caches are synced for node_authorizer 
	* I0310 01:16:44.852518       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	* I0310 01:16:44.852774       1 apf_controller.go:266] Running API Priority and Fairness config worker
	* I0310 01:16:44.853852       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	* I0310 01:16:44.855247       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	* I0310 01:16:44.957561       1 controller.go:609] quota admission added evaluator for: namespaces
	* I0310 01:16:45.710799       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	* I0310 01:16:45.711003       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	* I0310 01:16:45.719955       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	* I0310 01:16:45.724620       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	* I0310 01:16:45.724648       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	* I0310 01:16:46.246546       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	* I0310 01:16:46.281736       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	* W0310 01:16:46.391342       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.59.205]
	* I0310 01:16:46.394236       1 controller.go:609] quota admission added evaluator for: endpoints
	* I0310 01:16:46.399717       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	* I0310 01:16:47.225631       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	* I0310 01:16:48.029048       1 controller.go:609] quota admission added evaluator for: deployments.apps
	* I0310 01:16:48.167754       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	* I0310 01:16:48.925410       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	* 
	* ==> kube-controller-manager [94c928c56421] <==
	* Flag --port has been deprecated, see --secure-port instead.
	* I0310 01:16:39.052831       1 serving.go:331] Generated self-signed cert in-memory
	* I0310 01:16:40.947228       1 controllermanager.go:176] Version: v1.20.2
	* I0310 01:16:40.949175       1 secure_serving.go:197] Serving securely on 127.0.0.1:10257
	* I0310 01:16:40.949564       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	* I0310 01:16:40.949606       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	* I0310 01:16:40.949718       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	* I0310 01:16:47.219373       1 shared_informer.go:240] Waiting for caches to sync for tokens
	* I0310 01:16:47.319594       1 shared_informer.go:247] Caches are synced for tokens 
	* I0310 01:16:47.333815       1 node_lifecycle_controller.go:77] Sending events to api server
	* E0310 01:16:47.333872       1 core.go:232] failed to start cloud node lifecycle controller: no cloud provider provided
	* W0310 01:16:47.333885       1 controllermanager.go:546] Skipping "cloud-node-lifecycle"
	* I0310 01:16:47.354594       1 controllermanager.go:554] Started "attachdetach"
	* W0310 01:16:47.354625       1 controllermanager.go:546] Skipping "ttl-after-finished"
	* I0310 01:16:47.354773       1 attach_detach_controller.go:328] Starting attach detach controller
	* I0310 01:16:47.354786       1 shared_informer.go:240] Waiting for caches to sync for attach detach
	* I0310 01:16:47.449812       1 controllermanager.go:554] Started "podgc"
	* I0310 01:16:47.450154       1 gc_controller.go:89] Starting GC controller
	* I0310 01:16:47.450539       1 shared_informer.go:240] Waiting for caches to sync for GC
	* I0310 01:16:47.475490       1 controllermanager.go:554] Started "replicaset"
	* I0310 01:16:47.475604       1 replica_set.go:182] Starting replicaset controller
	* I0310 01:16:47.475625       1 shared_informer.go:240] Waiting for caches to sync for ReplicaSet
	* I0310 01:16:47.481205       1 node_ipam_controller.go:91] Sending events to api server.
	* 
	* ==> kube-scheduler [c44289b8d50f] <==
	* I0310 01:16:39.575249       1 serving.go:331] Generated self-signed cert in-memory
	* W0310 01:16:44.745135       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	* W0310 01:16:44.745187       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	* W0310 01:16:44.745205       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	* W0310 01:16:44.745221       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	* I0310 01:16:44.945198       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	* I0310 01:16:44.945471       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	* I0310 01:16:44.951368       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	* I0310 01:16:44.945529       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	* E0310 01:16:44.956384       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E0310 01:16:44.956904       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* E0310 01:16:44.956915       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	* E0310 01:16:44.957019       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	* E0310 01:16:44.957085       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	* E0310 01:16:44.957143       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	* E0310 01:16:44.957274       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	* E0310 01:16:44.959900       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* E0310 01:16:44.957541       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	* E0310 01:16:44.957654       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* E0310 01:16:44.959414       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	* E0310 01:16:44.959609       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	* E0310 01:16:45.876443       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	* E0310 01:16:46.014930       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* E0310 01:16:46.035306       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	* I0310 01:16:46.457881       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2021-03-10 01:16:18 UTC, end at Wed 2021-03-10 01:16:52 UTC. --
	* Mar 10 01:16:49 nospam-20210310011543-1084876 kubelet[2472]: I0310 01:16:49.481771    2472 state_mem.go:88] [cpumanager] updated default cpuset: ""
	* Mar 10 01:16:49 nospam-20210310011543-1084876 kubelet[2472]: I0310 01:16:49.481788    2472 state_mem.go:96] [cpumanager] updated cpuset assignments: "map[]"
	* Mar 10 01:16:49 nospam-20210310011543-1084876 kubelet[2472]: I0310 01:16:49.481801    2472 policy_none.go:43] [cpumanager] none policy: Start
	* Mar 10 01:16:49 nospam-20210310011543-1084876 kubelet[2472]: W0310 01:16:49.490183    2472 manager.go:594] Failed to retrieve checkpoint for "kubelet_internal_checkpoint": checkpoint is not found
	* Mar 10 01:16:49 nospam-20210310011543-1084876 kubelet[2472]: I0310 01:16:49.490533    2472 plugin_manager.go:114] Starting Kubelet Plugin Manager
	* Mar 10 01:16:49 nospam-20210310011543-1084876 kubelet[2472]: I0310 01:16:49.754829    2472 topology_manager.go:187] [topologymanager] Topology Admit Handler
	* Mar 10 01:16:49 nospam-20210310011543-1084876 kubelet[2472]: I0310 01:16:49.755009    2472 topology_manager.go:187] [topologymanager] Topology Admit Handler
	* Mar 10 01:16:49 nospam-20210310011543-1084876 kubelet[2472]: I0310 01:16:49.755092    2472 topology_manager.go:187] [topologymanager] Topology Admit Handler
	* Mar 10 01:16:49 nospam-20210310011543-1084876 kubelet[2472]: I0310 01:16:49.755139    2472 topology_manager.go:187] [topologymanager] Topology Admit Handler
	* Mar 10 01:16:49 nospam-20210310011543-1084876 kubelet[2472]: I0310 01:16:49.833134    2472 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/e20abef27603c3d735e3a3fdc8edd3c0-etcd-data") pod "etcd-nospam-20210310011543-1084876" (UID: "e20abef27603c3d735e3a3fdc8edd3c0")
	* Mar 10 01:16:49 nospam-20210310011543-1084876 kubelet[2472]: I0310 01:16:49.833253    2472 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-local-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/36f253cc3fab8baf8c612ffdcd0c6be6-usr-local-share-ca-certificates") pod "kube-apiserver-nospam-20210310011543-1084876" (UID: "36f253cc3fab8baf8c612ffdcd0c6be6")
	* Mar 10 01:16:49 nospam-20210310011543-1084876 kubelet[2472]: I0310 01:16:49.833285    2472 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/57b8c22dbe6410e4bd36cf14b0f8bdc7-ca-certs") pod "kube-controller-manager-nospam-20210310011543-1084876" (UID: "57b8c22dbe6410e4bd36cf14b0f8bdc7")
	* Mar 10 01:16:49 nospam-20210310011543-1084876 kubelet[2472]: I0310 01:16:49.833318    2472 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/57b8c22dbe6410e4bd36cf14b0f8bdc7-flexvolume-dir") pod "kube-controller-manager-nospam-20210310011543-1084876" (UID: "57b8c22dbe6410e4bd36cf14b0f8bdc7")
	* Mar 10 01:16:49 nospam-20210310011543-1084876 kubelet[2472]: I0310 01:16:49.833355    2472 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/57b8c22dbe6410e4bd36cf14b0f8bdc7-k8s-certs") pod "kube-controller-manager-nospam-20210310011543-1084876" (UID: "57b8c22dbe6410e4bd36cf14b0f8bdc7")
	* Mar 10 01:16:49 nospam-20210310011543-1084876 kubelet[2472]: I0310 01:16:49.833380    2472 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-ca-certificates" (UniqueName: "kubernetes.io/host-path/57b8c22dbe6410e4bd36cf14b0f8bdc7-etc-ca-certificates") pod "kube-controller-manager-nospam-20210310011543-1084876" (UID: "57b8c22dbe6410e4bd36cf14b0f8bdc7")
	* Mar 10 01:16:49 nospam-20210310011543-1084876 kubelet[2472]: I0310 01:16:49.833410    2472 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/57b8c22dbe6410e4bd36cf14b0f8bdc7-usr-share-ca-certificates") pod "kube-controller-manager-nospam-20210310011543-1084876" (UID: "57b8c22dbe6410e4bd36cf14b0f8bdc7")
	* Mar 10 01:16:49 nospam-20210310011543-1084876 kubelet[2472]: I0310 01:16:49.833440    2472 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-ca-certificates" (UniqueName: "kubernetes.io/host-path/36f253cc3fab8baf8c612ffdcd0c6be6-etc-ca-certificates") pod "kube-apiserver-nospam-20210310011543-1084876" (UID: "36f253cc3fab8baf8c612ffdcd0c6be6")
	* Mar 10 01:16:49 nospam-20210310011543-1084876 kubelet[2472]: I0310 01:16:49.833465    2472 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/36f253cc3fab8baf8c612ffdcd0c6be6-k8s-certs") pod "kube-apiserver-nospam-20210310011543-1084876" (UID: "36f253cc3fab8baf8c612ffdcd0c6be6")
	* Mar 10 01:16:49 nospam-20210310011543-1084876 kubelet[2472]: I0310 01:16:49.833495    2472 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/36f253cc3fab8baf8c612ffdcd0c6be6-usr-share-ca-certificates") pod "kube-apiserver-nospam-20210310011543-1084876" (UID: "36f253cc3fab8baf8c612ffdcd0c6be6")
	* Mar 10 01:16:49 nospam-20210310011543-1084876 kubelet[2472]: I0310 01:16:49.833521    2472 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-local-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/57b8c22dbe6410e4bd36cf14b0f8bdc7-usr-local-share-ca-certificates") pod "kube-controller-manager-nospam-20210310011543-1084876" (UID: "57b8c22dbe6410e4bd36cf14b0f8bdc7")
	* Mar 10 01:16:49 nospam-20210310011543-1084876 kubelet[2472]: I0310 01:16:49.833551    2472 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/6b4a0ee8b3d15a1c2e47c15d32e6eb0d-kubeconfig") pod "kube-scheduler-nospam-20210310011543-1084876" (UID: "6b4a0ee8b3d15a1c2e47c15d32e6eb0d")
	* Mar 10 01:16:49 nospam-20210310011543-1084876 kubelet[2472]: I0310 01:16:49.833572    2472 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/e20abef27603c3d735e3a3fdc8edd3c0-etcd-certs") pod "etcd-nospam-20210310011543-1084876" (UID: "e20abef27603c3d735e3a3fdc8edd3c0")
	* Mar 10 01:16:49 nospam-20210310011543-1084876 kubelet[2472]: I0310 01:16:49.833595    2472 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/36f253cc3fab8baf8c612ffdcd0c6be6-ca-certs") pod "kube-apiserver-nospam-20210310011543-1084876" (UID: "36f253cc3fab8baf8c612ffdcd0c6be6")
	* Mar 10 01:16:49 nospam-20210310011543-1084876 kubelet[2472]: I0310 01:16:49.833619    2472 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/57b8c22dbe6410e4bd36cf14b0f8bdc7-kubeconfig") pod "kube-controller-manager-nospam-20210310011543-1084876" (UID: "57b8c22dbe6410e4bd36cf14b0f8bdc7")
	* Mar 10 01:16:49 nospam-20210310011543-1084876 kubelet[2472]: I0310 01:16:49.833631    2472 reconciler.go:157] Reconciler: start to sync state
	* 
	* ==> Audit <==
	* |------------|---------------------------------------------|---------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	|  Command   |                    Args                     |                   Profile                   |  User   | Version |          Start Time           |           End Time            |
	|------------|---------------------------------------------|---------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| node       | add -p                                      | multinode-20210310005641-1084876            | jenkins | v1.18.1 | Wed, 10 Mar 2021 00:59:05 UTC | Wed, 10 Mar 2021 00:59:27 UTC |
	|            | multinode-20210310005641-1084876            |                                             |         |         |                               |                               |
	|            | -v 3 --alsologtostderr                      |                                             |         |         |                               |                               |
	| profile    | list --output json                          | minikube                                    | jenkins | v1.18.1 | Wed, 10 Mar 2021 00:59:27 UTC | Wed, 10 Mar 2021 00:59:28 UTC |
	| -p         | multinode-20210310005641-1084876            | multinode-20210310005641-1084876            | jenkins | v1.18.1 | Wed, 10 Mar 2021 00:59:28 UTC | Wed, 10 Mar 2021 00:59:29 UTC |
	|            | node stop m03                               |                                             |         |         |                               |                               |
	| -p         | multinode-20210310005641-1084876            | multinode-20210310005641-1084876            | jenkins | v1.18.1 | Wed, 10 Mar 2021 00:59:31 UTC | Wed, 10 Mar 2021 01:00:24 UTC |
	|            | node start m03 --alsologtostderr            |                                             |         |         |                               |                               |
	| -p         | multinode-20210310005641-1084876            | multinode-20210310005641-1084876            | jenkins | v1.18.1 | Wed, 10 Mar 2021 01:00:25 UTC | Wed, 10 Mar 2021 01:00:30 UTC |
	|            | node delete m03                             |                                             |         |         |                               |                               |
	| -p         | multinode-20210310005641-1084876            | multinode-20210310005641-1084876            | jenkins | v1.18.1 | Wed, 10 Mar 2021 01:00:31 UTC | Wed, 10 Mar 2021 01:00:38 UTC |
	|            | stop                                        |                                             |         |         |                               |                               |
	| start      | -p                                          | multinode-20210310005641-1084876            | jenkins | v1.18.1 | Wed, 10 Mar 2021 01:00:39 UTC | Wed, 10 Mar 2021 01:08:22 UTC |
	|            | multinode-20210310005641-1084876            |                                             |         |         |                               |                               |
	|            | --wait=true -v=8                            |                                             |         |         |                               |                               |
	|            | --alsologtostderr                           |                                             |         |         |                               |                               |
	|            | --driver=docker                             |                                             |         |         |                               |                               |
	|            | --container-runtime=docker                  |                                             |         |         |                               |                               |
	| start      | -p                                          | multinode-20210310005641-1084876-m03        | jenkins | v1.18.1 | Wed, 10 Mar 2021 01:08:23 UTC | Wed, 10 Mar 2021 01:08:53 UTC |
	|            | multinode-20210310005641-1084876-m03        |                                             |         |         |                               |                               |
	|            | --driver=docker                             |                                             |         |         |                               |                               |
	|            | --container-runtime=docker                  |                                             |         |         |                               |                               |
	| delete     | -p                                          | multinode-20210310005641-1084876-m03        | jenkins | v1.18.1 | Wed, 10 Mar 2021 01:08:53 UTC | Wed, 10 Mar 2021 01:08:56 UTC |
	|            | multinode-20210310005641-1084876-m03        |                                             |         |         |                               |                               |
	| delete     | -p                                          | multinode-20210310005641-1084876            | jenkins | v1.18.1 | Wed, 10 Mar 2021 01:08:56 UTC | Wed, 10 Mar 2021 01:09:01 UTC |
	|            | multinode-20210310005641-1084876            |                                             |         |         |                               |                               |
	| start      | -p                                          | test-preload-20210310011103-1084876         | jenkins | v1.18.1 | Wed, 10 Mar 2021 01:11:03 UTC | Wed, 10 Mar 2021 01:12:36 UTC |
	|            | test-preload-20210310011103-1084876         |                                             |         |         |                               |                               |
	|            | --memory=2200 --alsologtostderr             |                                             |         |         |                               |                               |
	|            | --wait=true --preload=false                 |                                             |         |         |                               |                               |
	|            | --driver=docker                             |                                             |         |         |                               |                               |
	|            | --container-runtime=docker                  |                                             |         |         |                               |                               |
	|            | --kubernetes-version=v1.17.0                |                                             |         |         |                               |                               |
	| ssh        | -p                                          | test-preload-20210310011103-1084876         | jenkins | v1.18.1 | Wed, 10 Mar 2021 01:12:36 UTC | Wed, 10 Mar 2021 01:12:38 UTC |
	|            | test-preload-20210310011103-1084876         |                                             |         |         |                               |                               |
	|            | -- docker pull busybox                      |                                             |         |         |                               |                               |
	| start      | -p                                          | test-preload-20210310011103-1084876         | jenkins | v1.18.1 | Wed, 10 Mar 2021 01:12:38 UTC | Wed, 10 Mar 2021 01:13:06 UTC |
	|            | test-preload-20210310011103-1084876         |                                             |         |         |                               |                               |
	|            | --memory=2200 --alsologtostderr             |                                             |         |         |                               |                               |
	|            | -v=1 --wait=true --driver=docker            |                                             |         |         |                               |                               |
	|            |  --container-runtime=docker                 |                                             |         |         |                               |                               |
	|            | --kubernetes-version=v1.17.3                |                                             |         |         |                               |                               |
	| ssh        | -p                                          | test-preload-20210310011103-1084876         | jenkins | v1.18.1 | Wed, 10 Mar 2021 01:13:06 UTC | Wed, 10 Mar 2021 01:13:07 UTC |
	|            | test-preload-20210310011103-1084876         |                                             |         |         |                               |                               |
	|            | -- docker images                            |                                             |         |         |                               |                               |
	| delete     | -p                                          | test-preload-20210310011103-1084876         | jenkins | v1.18.1 | Wed, 10 Mar 2021 01:13:07 UTC | Wed, 10 Mar 2021 01:13:10 UTC |
	|            | test-preload-20210310011103-1084876         |                                             |         |         |                               |                               |
	| start      | -p                                          | scheduled-stop-20210310011310-1084876       | jenkins | v1.18.1 | Wed, 10 Mar 2021 01:13:10 UTC | Wed, 10 Mar 2021 01:13:39 UTC |
	|            | scheduled-stop-20210310011310-1084876       |                                             |         |         |                               |                               |
	|            | --memory=1900 --driver=docker               |                                             |         |         |                               |                               |
	|            | --container-runtime=docker                  |                                             |         |         |                               |                               |
	| stop       | -p                                          | scheduled-stop-20210310011310-1084876       | jenkins | v1.18.1 | Wed, 10 Mar 2021 01:13:40 UTC | Wed, 10 Mar 2021 01:13:40 UTC |
	|            | scheduled-stop-20210310011310-1084876       |                                             |         |         |                               |                               |
	|            | --cancel-scheduled                          |                                             |         |         |                               |                               |
	| stop       | -p                                          | scheduled-stop-20210310011310-1084876       | jenkins | v1.18.1 | Wed, 10 Mar 2021 01:13:53 UTC | Wed, 10 Mar 2021 01:14:09 UTC |
	|            | scheduled-stop-20210310011310-1084876       |                                             |         |         |                               |                               |
	|            | --schedule 5s                               |                                             |         |         |                               |                               |
	| delete     | -p                                          | scheduled-stop-20210310011310-1084876       | jenkins | v1.18.1 | Wed, 10 Mar 2021 01:14:11 UTC | Wed, 10 Mar 2021 01:14:13 UTC |
	|            | scheduled-stop-20210310011310-1084876       |                                             |         |         |                               |                               |
	| start      | -p                                          | skaffold-20210310011413-1084876             | jenkins | v1.18.1 | Wed, 10 Mar 2021 01:14:14 UTC | Wed, 10 Mar 2021 01:14:43 UTC |
	|            | skaffold-20210310011413-1084876             |                                             |         |         |                               |                               |
	|            | --memory=2600 --driver=docker               |                                             |         |         |                               |                               |
	|            | --container-runtime=docker                  |                                             |         |         |                               |                               |
	| docker-env | --shell none -p                             | skaffold-20210310011413-1084876             | jenkins | v1.18.1 | Wed, 10 Mar 2021 01:14:44 UTC | Wed, 10 Mar 2021 01:14:44 UTC |
	|            | skaffold-20210310011413-1084876             |                                             |         |         |                               |                               |
	| delete     | -p                                          | skaffold-20210310011413-1084876             | jenkins | v1.18.1 | Wed, 10 Mar 2021 01:15:29 UTC | Wed, 10 Mar 2021 01:15:32 UTC |
	|            | skaffold-20210310011413-1084876             |                                             |         |         |                               |                               |
	| delete     | -p                                          | insufficient-storage-20210310011532-1084876 | jenkins | v1.18.1 | Wed, 10 Mar 2021 01:15:41 UTC | Wed, 10 Mar 2021 01:15:43 UTC |
	|            | insufficient-storage-20210310011532-1084876 |                                             |         |         |                               |                               |
	| start      | -p                                          | kubernetes-upgrade-20210310011543-1084876   | jenkins | v1.18.1 | Wed, 10 Mar 2021 01:15:43 UTC | Wed, 10 Mar 2021 01:16:42 UTC |
	|            | kubernetes-upgrade-20210310011543-1084876   |                                             |         |         |                               |                               |
	|            | --memory=2200                               |                                             |         |         |                               |                               |
	|            | --kubernetes-version=v1.14.0                |                                             |         |         |                               |                               |
	|            | --alsologtostderr -v=1 --driver=docker      |                                             |         |         |                               |                               |
	|            | --container-runtime=docker                  |                                             |         |         |                               |                               |
	| start      | -p                                          | nospam-20210310011543-1084876               | jenkins | v1.18.1 | Wed, 10 Mar 2021 01:15:43 UTC | Wed, 10 Mar 2021 01:16:50 UTC |
	|            | nospam-20210310011543-1084876               |                                             |         |         |                               |                               |
	|            | -n=1 --memory=2250                          |                                             |         |         |                               |                               |
	|            | --wait=false --driver=docker                |                                             |         |         |                               |                               |
	|            | --container-runtime=docker                  |                                             |         |         |                               |                               |
	|------------|---------------------------------------------|---------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
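The `Audit` table above records each minikube invocation as pipe-delimited columns (Command, Args, Profile, User, ...). As a minimal sketch, the rows can be reduced to a flat list of commands with a small awk filter; `parse_audit` is a hypothetical helper (not part of minikube), and the column positions are assumed from the table header:

```shell
# Hypothetical helper: extract Command and Profile from minikube "Audit" table rows.
# Column positions ($2 = Command, $4 = Profile) are assumed from the header above;
# header, separator, and continuation rows are skipped.
parse_audit() {
  awk -F'|' '/^[ \t]*\|/ && $2 !~ /Command|---/ {
    gsub(/^ +| +$/, "", $2); gsub(/^ +| +$/, "", $4)
    if ($2 != "") printf "%s (profile: %s)\n", $2, $4
  }'
}

# Example row in the same layout as the table above:
printf '| start      | -p   | demo-profile | jenkins |\n' | parse_audit
# → start (profile: demo-profile)
```

Piping the whole report (or `minikube logs`) through the same filter lists every command that ran, which makes it easier to see which invocation preceded a failure.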
	* 
	* ==> Last Start <==
	* Log file created at: 2021/03/10 01:15:43
	* Running on machine: debian-jenkins-agent-14
	* Binary: Built with gc go1.16 for linux/amd64
	* Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	* I0310 01:15:43.599206 1227416 out.go:239] Setting OutFile to fd 1 ...
	* I0310 01:15:43.599296 1227416 out.go:286] TERM=,COLORTERM=, which probably does not support color
	* I0310 01:15:43.599309 1227416 out.go:252] Setting ErrFile to fd 2...
	* I0310 01:15:43.599316 1227416 out.go:286] TERM=,COLORTERM=, which probably does not support color
	* I0310 01:15:43.599452 1227416 root.go:308] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/bin
	* I0310 01:15:43.599824 1227416 out.go:246] Setting JSON to false
	* I0310 01:15:43.647665 1227416 start.go:108] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":17904,"bootTime":1615321039,"procs":177,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-15-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	* I0310 01:15:43.647852 1227416 start.go:118] virtualization: kvm guest
	* I0310 01:15:43.651439 1227416 out.go:129] * [offline-docker-20210310011543-1084876] minikube v1.18.1 on Debian 9.13 (kvm/amd64)
	* I0310 01:15:43.605973 1227418 out.go:239] Setting OutFile to fd 1 ...
	* I0310 01:15:43.606047 1227418 out.go:286] TERM=,COLORTERM=, which probably does not support color
	* I0310 01:15:43.606051 1227418 out.go:252] Setting ErrFile to fd 2...
	* I0310 01:15:43.606055 1227418 out.go:286] TERM=,COLORTERM=, which probably does not support color
	* I0310 01:15:43.606160 1227418 root.go:308] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/bin
	* I0310 01:15:43.606400 1227418 out.go:246] Setting JSON to false
	* I0310 01:15:43.651053 1227418 start.go:108] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":17904,"bootTime":1615321039,"procs":176,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-15-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	* I0310 01:15:43.651193 1227418 start.go:118] virtualization: kvm guest
	* I0310 01:15:43.654246 1227416 out.go:129]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/kubeconfig
	* I0310 01:15:43.651691 1227416 notify.go:126] Checking for updates...
	* I0310 01:15:43.654219 1227418 out.go:129] * [pause-20210310011543-1084876] minikube v1.18.1 on Debian 9.13 (kvm/amd64)
	* I0310 01:15:43.598084 1227419 out.go:239] Setting OutFile to fd 1 ...
	* I0310 01:15:43.598217 1227419 out.go:286] TERM=,COLORTERM=, which probably does not support color
	* I0310 01:15:43.598234 1227419 out.go:252] Setting ErrFile to fd 2...
	* I0310 01:15:43.598240 1227419 out.go:286] TERM=,COLORTERM=, which probably does not support color
	* I0310 01:15:43.598411 1227419 root.go:308] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/bin
	* I0310 01:15:43.598841 1227419 out.go:246] Setting JSON to false
	* I0310 01:15:43.654098 1227419 start.go:108] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":17904,"bootTime":1615321039,"procs":174,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-15-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	* I0310 01:15:43.654217 1227419 start.go:118] virtualization: kvm guest
	* I0310 01:15:43.598626 1227420 out.go:239] Setting OutFile to fd 1 ...
	* I0310 01:15:43.598771 1227420 out.go:286] TERM=,COLORTERM=, which probably does not support color
	* I0310 01:15:43.598776 1227420 out.go:252] Setting ErrFile to fd 2...
	* I0310 01:15:43.598781 1227420 out.go:286] TERM=,COLORTERM=, which probably does not support color
	* I0310 01:15:43.598938 1227420 root.go:308] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/bin
	* I0310 01:15:43.599283 1227420 out.go:246] Setting JSON to false
	* I0310 01:15:43.654254 1227420 start.go:108] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":17904,"bootTime":1615321039,"procs":174,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-15-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	* I0310 01:15:43.654384 1227420 start.go:118] virtualization: kvm guest
	* I0310 01:15:43.657520 1227418 out.go:129]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/kubeconfig
	* I0310 01:15:43.657489 1227416 out.go:129]   - MINIKUBE_BIN=out/minikube-linux-amd64
	* I0310 01:15:43.657527 1227419 out.go:129] * [kubernetes-upgrade-20210310011543-1084876] minikube v1.18.1 on Debian 9.13 (kvm/amd64)
	* I0310 01:15:43.658929 1227420 out.go:129] * [nospam-20210310011543-1084876] minikube v1.18.1 on Debian 9.13 (kvm/amd64)
	* I0310 01:15:43.660174 1227416 out.go:129]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube
	* I0310 01:15:43.654432 1227418 notify.go:126] Checking for updates...
	* I0310 01:15:43.660208 1227418 out.go:129]   - MINIKUBE_BIN=out/minikube-linux-amd64
	* I0310 01:15:43.662617 1227420 out.go:129]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/kubeconfig
	* I0310 01:15:43.659132 1227420 notify.go:126] Checking for updates...
	* I0310 01:15:43.662639 1227418 out.go:129]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube
	* I0310 01:15:43.665149 1227420 out.go:129]   - MINIKUBE_BIN=out/minikube-linux-amd64
	* I0310 01:15:43.665581 1227419 out.go:129]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/kubeconfig
	* I0310 01:15:43.667504 1227420 out.go:129]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube
	* I0310 01:15:43.657768 1227419 notify.go:126] Checking for updates...
	* I0310 01:15:43.667534 1227419 out.go:129]   - MINIKUBE_BIN=out/minikube-linux-amd64
	* I0310 01:15:43.669664 1227419 out.go:129]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube
	* I0310 01:15:43.663103 1227416 out.go:129]   - MINIKUBE_LOCATION=10730
	* I0310 01:15:43.663481 1227416 driver.go:317] Setting default libvirt URI to qemu:///system
	* I0310 01:15:43.746869 1227416 docker.go:119] docker version: linux-19.03.15
	* I0310 01:15:43.746959 1227416 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	* I0310 01:15:43.864257 1227416 info.go:253] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:98 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:50 SystemTime:2021-03-10 01:15:43.796412942 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-15-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:31628283904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	* I0310 01:15:43.864415 1227416 docker.go:216] overlay module found
	* I0310 01:15:43.671646 1227419 out.go:129]   - MINIKUBE_LOCATION=10730
	* I0310 01:15:43.671948 1227419 driver.go:317] Setting default libvirt URI to qemu:///system
	* I0310 01:15:43.749670 1227419 docker.go:119] docker version: linux-19.03.15
	* I0310 01:15:43.749770 1227419 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	* I0310 01:15:43.865058 1227419 info.go:253] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:98 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:50 SystemTime:2021-03-10 01:15:43.796546795 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-15-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:31628283904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	* I0310 01:15:43.865191 1227419 docker.go:216] overlay module found
	* I0310 01:15:43.867329 1227419 out.go:129] * Using the docker driver based on user configuration
	* I0310 01:15:43.867357 1227419 start.go:276] selected driver: docker
	* I0310 01:15:43.867364 1227419 start.go:718] validating driver "docker" against <nil>
	* I0310 01:15:43.867386 1227419 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	* W0310 01:15:43.867434 1227419 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	* W0310 01:15:43.867540 1227419 out.go:191] ! Your cgroup does not allow setting memory.
	* I0310 01:15:43.867322 1227416 out.go:129] * Using the docker driver based on user configuration
	* I0310 01:15:43.867358 1227416 start.go:276] selected driver: docker
	* I0310 01:15:43.867366 1227416 start.go:718] validating driver "docker" against <nil>
	* I0310 01:15:43.867387 1227416 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	* W0310 01:15:43.867434 1227416 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	* W0310 01:15:43.867544 1227416 out.go:191] ! Your cgroup does not allow setting memory.
	* I0310 01:15:43.664848 1227418 out.go:129]   - MINIKUBE_LOCATION=10730
	* I0310 01:15:43.665174 1227418 driver.go:317] Setting default libvirt URI to qemu:///system
	* I0310 01:15:43.746373 1227418 docker.go:119] docker version: linux-19.03.15
	* I0310 01:15:43.746570 1227418 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	* I0310 01:15:43.869421 1227418 info.go:253] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:98 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:56 SystemTime:2021-03-10 01:15:43.802555704 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-15-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:31628283904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	* I0310 01:15:43.869560 1227418 docker.go:216] overlay module found
	* I0310 01:15:43.872086 1227418 out.go:129] * Using the docker driver based on user configuration
	* I0310 01:15:43.872122 1227418 start.go:276] selected driver: docker
	* I0310 01:15:43.872128 1227418 start.go:718] validating driver "docker" against <nil>
	* I0310 01:15:43.872148 1227418 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	* W0310 01:15:43.872209 1227418 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	* W0310 01:15:43.872301 1227418 out.go:191] ! Your cgroup does not allow setting memory.
	* I0310 01:15:43.874564 1227418 out.go:129]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* I0310 01:15:43.877161 1227418 out.go:129] 
	* W0310 01:15:43.877272 1227418 out.go:191] X Requested memory allocation (1800MB) is less than the recommended minimum 1900MB. Deployments may fail.
	* I0310 01:15:43.669608 1227420 out.go:129]   - MINIKUBE_LOCATION=10730
	* I0310 01:15:43.669987 1227420 driver.go:317] Setting default libvirt URI to qemu:///system
	* I0310 01:15:43.766472 1227420 docker.go:119] docker version: linux-19.03.15
	* I0310 01:15:43.766577 1227420 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	* I0310 01:15:43.887748 1227420 info.go:253] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:98 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:51 SystemTime:2021-03-10 01:15:43.817450842 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-15-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAdd
ress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:31628283904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warn
ings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	* I0310 01:15:43.887920 1227420 docker.go:216] overlay module found
	* I0310 01:15:43.890490 1227420 out.go:129] * Using the docker driver based on user configuration
	* I0310 01:15:43.890520 1227420 start.go:276] selected driver: docker
	* I0310 01:15:43.890526 1227420 start.go:718] validating driver "docker" against <nil>
	* I0310 01:15:43.890543 1227420 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	* W0310 01:15:43.890583 1227420 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	* W0310 01:15:43.890673 1227420 out.go:191] ! Your cgroup does not allow setting memory.
	* I0310 01:15:43.869914 1227419 out.go:129]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* I0310 01:15:43.870577 1227419 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	* I0310 01:15:43.988676 1227419 info.go:253] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:98 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:48 SystemTime:2021-03-10 01:15:43.91930796 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-15-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddr
ess:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:31628283904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warni
ngs:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	* I0310 01:15:43.988809 1227419 start_flags.go:253] no existing cluster config was found, will generate one from the flags 
	* I0310 01:15:43.989009 1227419 start_flags.go:699] Wait components to verify : map[apiserver:true system_pods:true]
	* I0310 01:15:43.989043 1227419 cni.go:74] Creating CNI manager for ""
	* I0310 01:15:43.989050 1227419 cni.go:140] CNI unnecessary in this configuration, recommending no CNI
	* I0310 01:15:43.989056 1227419 start_flags.go:398] config:
	* {Name:kubernetes-upgrade-20210310011543-1084876 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:kubernetes-upgrade-20210310011543-1084876 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	* I0310 01:15:43.869924 1227416 out.go:129]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* I0310 01:15:43.870715 1227416 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	* I0310 01:15:43.990317 1227416 info.go:253] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:98 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:52 SystemTime:2021-03-10 01:15:43.923546015 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-15-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAdd
ress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:31628283904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warn
ings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	* I0310 01:15:43.990435 1227416 start_flags.go:253] no existing cluster config was found, will generate one from the flags 
	* I0310 01:15:43.990655 1227416 start_flags.go:717] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	* I0310 01:15:43.990702 1227416 cni.go:74] Creating CNI manager for ""
	* I0310 01:15:43.990715 1227416 cni.go:140] CNI unnecessary in this configuration, recommending no CNI
	* I0310 01:15:43.990730 1227416 start_flags.go:398] config:
	* {Name:offline-docker-20210310011543-1084876 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:offline-docker-20210310011543-1084876 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	* I0310 01:15:43.879163 1227418 out.go:129] 
	* I0310 01:15:43.879260 1227418 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	* I0310 01:15:43.993082 1227418 info.go:253] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:98 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:56 SystemTime:2021-03-10 01:15:43.929448752 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-15-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAdd
ress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:31628283904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warn
ings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	* I0310 01:15:43.993243 1227418 start_flags.go:253] no existing cluster config was found, will generate one from the flags 
	* I0310 01:15:43.993474 1227418 start_flags.go:717] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	* I0310 01:15:43.993512 1227418 cni.go:74] Creating CNI manager for ""
	* I0310 01:15:43.993523 1227418 cni.go:140] CNI unnecessary in this configuration, recommending no CNI
	* I0310 01:15:43.993530 1227418 start_flags.go:398] config:
	* {Name:pause-20210310011543-1084876 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:1800 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:pause-20210310011543-1084876 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket
: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	* I0310 01:15:43.892637 1227420 out.go:129]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* I0310 01:15:43.893399 1227420 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	* I0310 01:15:44.010239 1227420 info.go:253] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:98 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:51 SystemTime:2021-03-10 01:15:43.944713304 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-15-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAdd
ress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:31628283904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warn
ings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	* I0310 01:15:44.010349 1227420 start_flags.go:253] no existing cluster config was found, will generate one from the flags 
	* I0310 01:15:44.010562 1227420 start_flags.go:712] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	* I0310 01:15:44.010598 1227420 cni.go:74] Creating CNI manager for ""
	* I0310 01:15:44.010607 1227420 cni.go:140] CNI unnecessary in this configuration, recommending no CNI
	* I0310 01:15:44.010612 1227420 start_flags.go:398] config:
	* {Name:nospam-20210310011543-1084876 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2250 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:nospam-20210310011543-1084876 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISock
et: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	* I0310 01:15:43.996239 1227418 out.go:129] * Starting control plane node pause-20210310011543-1084876 in cluster pause-20210310011543-1084876
	* I0310 01:15:44.082643 1227418 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e in local docker daemon, skipping pull
	* I0310 01:15:44.082665 1227418 cache.go:116] gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e exists in daemon, skipping pull
	* I0310 01:15:44.082676 1227418 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	* I0310 01:15:44.082728 1227418 preload.go:105] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	* I0310 01:15:44.082736 1227418 cache.go:54] Caching tarball of preloaded images
	* I0310 01:15:44.082760 1227418 preload.go:131] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	* I0310 01:15:44.082765 1227418 cache.go:57] Finished verifying existence of preloaded tar for  v1.20.2 on docker
	* I0310 01:15:44.083270 1227418 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/profiles/pause-20210310011543-1084876/config.json ...
	* I0310 01:15:44.083305 1227418 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/profiles/pause-20210310011543-1084876/config.json: {Name:mk09753c2c4b1e384f1cc82853613719e571c441 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 01:15:44.083654 1227418 cache.go:185] Successfully downloaded all kic artifacts
	* I0310 01:15:44.083684 1227418 start.go:313] acquiring machines lock for pause-20210310011543-1084876: {Name:mkfc40ee9cca690bec4d482e53406ebcb9d862f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 01:15:44.083760 1227418 start.go:317] acquired machines lock for "pause-20210310011543-1084876" in 60.557µs
	* I0310 01:15:44.083781 1227418 start.go:89] Provisioning new machine with config: &{Name:pause-20210310011543-1084876 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:1800 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:pause-20210310011543-1084876 Namespace:default APIServerName:minikubeCA APIServer
Names:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}
	* I0310 01:15:44.083865 1227418 start.go:126] createHost starting for "" (driver="docker")
	* I0310 01:15:43.992189 1227419 out.go:129] * Starting control plane node kubernetes-upgrade-20210310011543-1084876 in cluster kubernetes-upgrade-20210310011543-1084876
	* I0310 01:15:44.087463 1227419 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e in local docker daemon, skipping pull
	* I0310 01:15:44.087490 1227419 cache.go:116] gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e exists in daemon, skipping pull
	* I0310 01:15:44.087501 1227419 preload.go:97] Checking if preload exists for k8s version v1.14.0 and runtime docker
	* I0310 01:15:44.087551 1227419 preload.go:105] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v9-v1.14.0-docker-overlay2-amd64.tar.lz4
	* I0310 01:15:44.087562 1227419 cache.go:54] Caching tarball of preloaded images
	* I0310 01:15:44.087590 1227419 preload.go:131] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v9-v1.14.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	* I0310 01:15:44.087614 1227419 cache.go:57] Finished verifying existence of preloaded tar for  v1.14.0 on docker
	* I0310 01:15:44.088095 1227419 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/profiles/kubernetes-upgrade-20210310011543-1084876/config.json ...
	* I0310 01:15:44.088144 1227419 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/profiles/kubernetes-upgrade-20210310011543-1084876/config.json: {Name:mk960dc23e81bf961af42f0448a54f3f314b3aa7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 01:15:44.088499 1227419 cache.go:185] Successfully downloaded all kic artifacts
	* I0310 01:15:44.088539 1227419 start.go:313] acquiring machines lock for kubernetes-upgrade-20210310011543-1084876: {Name:mkbd2e0bb613043c014f4bfc41f3179838369689 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 01:15:44.088652 1227419 start.go:317] acquired machines lock for "kubernetes-upgrade-20210310011543-1084876" in 84.443µs
	* I0310 01:15:44.088685 1227419 start.go:89] Provisioning new machine with config: &{Name:kubernetes-upgrade-20210310011543-1084876 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:kubernetes-upgrade-20210310011543-1084876 Namespace:default APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}
	* I0310 01:15:44.088781 1227419 start.go:126] createHost starting for "" (driver="docker")
	* I0310 01:15:43.993735 1227416 out.go:129] * Starting control plane node offline-docker-20210310011543-1084876 in cluster offline-docker-20210310011543-1084876
	* I0310 01:15:44.095234 1227416 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e in local docker daemon, skipping pull
	* I0310 01:15:44.095261 1227416 cache.go:116] gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e exists in daemon, skipping pull
	* I0310 01:15:44.095272 1227416 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	* I0310 01:15:44.095311 1227416 preload.go:105] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	* I0310 01:15:44.095333 1227416 cache.go:54] Caching tarball of preloaded images
	* I0310 01:15:44.095365 1227416 preload.go:131] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	* I0310 01:15:44.095384 1227416 cache.go:57] Finished verifying existence of preloaded tar for  v1.20.2 on docker
	* I0310 01:15:44.095837 1227416 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/profiles/offline-docker-20210310011543-1084876/config.json ...
	* I0310 01:15:44.095878 1227416 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/profiles/offline-docker-20210310011543-1084876/config.json: {Name:mkd23a9a5ca04316e6f20da56496886afed1409f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 01:15:44.096134 1227416 cache.go:185] Successfully downloaded all kic artifacts
	* I0310 01:15:44.096169 1227416 start.go:313] acquiring machines lock for offline-docker-20210310011543-1084876: {Name:mke8f83440daa62a4117dfdfc11c9759e7e786d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 01:15:44.096249 1227416 start.go:317] acquired machines lock for "offline-docker-20210310011543-1084876" in 59.743µs
	* I0310 01:15:44.096276 1227416 start.go:89] Provisioning new machine with config: &{Name:offline-docker-20210310011543-1084876 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:offline-docker-20210310011543-1084876 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}
	* I0310 01:15:44.096370 1227416 start.go:126] createHost starting for "" (driver="docker")
	* I0310 01:15:44.013840 1227420 out.go:129] * Starting control plane node nospam-20210310011543-1084876 in cluster nospam-20210310011543-1084876
	* I0310 01:15:44.108035 1227420 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e in local docker daemon, skipping pull
	* I0310 01:15:44.108051 1227420 cache.go:116] gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e exists in daemon, skipping pull
	* I0310 01:15:44.108062 1227420 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	* I0310 01:15:44.108107 1227420 preload.go:105] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	* I0310 01:15:44.108114 1227420 cache.go:54] Caching tarball of preloaded images
	* I0310 01:15:44.108129 1227420 preload.go:131] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	* I0310 01:15:44.108136 1227420 cache.go:57] Finished verifying existence of preloaded tar for  v1.20.2 on docker
	* I0310 01:15:44.108637 1227420 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/profiles/nospam-20210310011543-1084876/config.json ...
	* I0310 01:15:44.108727 1227420 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/profiles/nospam-20210310011543-1084876/config.json: {Name:mk9b19c1c0152d662a33c0a51fd329dd02e6bf34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 01:15:44.109043 1227420 cache.go:185] Successfully downloaded all kic artifacts
	* I0310 01:15:44.109077 1227420 start.go:313] acquiring machines lock for nospam-20210310011543-1084876: {Name:mka5eb6acd7c8f343f48e24270ad288716293ec8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 01:15:44.109173 1227420 start.go:317] acquired machines lock for "nospam-20210310011543-1084876" in 79.537µs
	* I0310 01:15:44.109195 1227420 start.go:89] Provisioning new machine with config: &{Name:nospam-20210310011543-1084876 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2250 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:nospam-20210310011543-1084876 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}
	* I0310 01:15:44.109280 1227420 start.go:126] createHost starting for "" (driver="docker")
	* I0310 01:15:44.091715 1227419 out.go:150] * Creating docker container (CPUs=2, Memory=2200MB) ...
	* I0310 01:15:44.092011 1227419 start.go:160] libmachine.API.Create for "kubernetes-upgrade-20210310011543-1084876" (driver="docker")
	* I0310 01:15:44.092052 1227419 client.go:168] LocalClient.Create starting
	* I0310 01:15:44.092186 1227419 main.go:121] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/certs/ca.pem
	* I0310 01:15:44.092260 1227419 main.go:121] libmachine: Decoding PEM data...
	* I0310 01:15:44.092283 1227419 main.go:121] libmachine: Parsing certificate...
	* I0310 01:15:44.092432 1227419 main.go:121] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/certs/cert.pem
	* I0310 01:15:44.092488 1227419 main.go:121] libmachine: Decoding PEM data...
	* I0310 01:15:44.092515 1227419 main.go:121] libmachine: Parsing certificate...
	* I0310 01:15:44.092972 1227419 cli_runner.go:115] Run: docker network inspect kubernetes-upgrade-20210310011543-1084876 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	* W0310 01:15:44.149219 1227419 cli_runner.go:162] docker network inspect kubernetes-upgrade-20210310011543-1084876 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	* I0310 01:15:44.149303 1227419 network_create.go:240] running [docker network inspect kubernetes-upgrade-20210310011543-1084876] to gather additional debugging logs...
	* I0310 01:15:44.149321 1227419 cli_runner.go:115] Run: docker network inspect kubernetes-upgrade-20210310011543-1084876
	* W0310 01:15:44.206228 1227419 cli_runner.go:162] docker network inspect kubernetes-upgrade-20210310011543-1084876 returned with exit code 1
	* I0310 01:15:44.206266 1227419 network_create.go:243] error running [docker network inspect kubernetes-upgrade-20210310011543-1084876]: docker network inspect kubernetes-upgrade-20210310011543-1084876: exit status 1
	* stdout:
	* []
	* 
	* stderr:
	* Error: No such network: kubernetes-upgrade-20210310011543-1084876
	* I0310 01:15:44.206303 1227419 network_create.go:245] output of [docker network inspect kubernetes-upgrade-20210310011543-1084876]: -- stdout --
	* []
	* 
	* -- /stdout --
	* ** stderr ** 
	* Error: No such network: kubernetes-upgrade-20210310011543-1084876
	* 
	* ** /stderr **
	* I0310 01:15:44.206373 1227419 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	* I0310 01:15:44.266017 1227419 network.go:193] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	* I0310 01:15:44.266094 1227419 network_create.go:91] attempt to create network 192.168.49.0/24 with subnet: kubernetes-upgrade-20210310011543-1084876 and gateway 192.168.49.1 and MTU of 1500 ...
	* I0310 01:15:44.266169 1227419 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20210310011543-1084876
	* W0310 01:15:44.328331 1227419 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20210310011543-1084876 returned with exit code 1
	* W0310 01:15:44.328520 1227419 out.go:191] ! Unable to create dedicated network, this might result in cluster IP change after restart: failed to create network after 20 attempts
	* I0310 01:15:44.328607 1227419 cli_runner.go:115] Run: docker ps -a --format 
	* I0310 01:15:44.384610 1227419 cli_runner.go:115] Run: docker volume create kubernetes-upgrade-20210310011543-1084876 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20210310011543-1084876 --label created_by.minikube.sigs.k8s.io=true
	* I0310 01:15:44.436840 1227419 oci.go:102] Successfully created a docker volume kubernetes-upgrade-20210310011543-1084876
	* I0310 01:15:44.436929 1227419 cli_runner.go:115] Run: docker run --rm --name kubernetes-upgrade-20210310011543-1084876-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20210310011543-1084876 --entrypoint /usr/bin/test -v kubernetes-upgrade-20210310011543-1084876:/var gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -d /var/lib
	* I0310 01:15:45.914655 1227419 cli_runner.go:168] Completed: docker run --rm --name kubernetes-upgrade-20210310011543-1084876-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20210310011543-1084876 --entrypoint /usr/bin/test -v kubernetes-upgrade-20210310011543-1084876:/var gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -d /var/lib: (1.477514671s)
	* I0310 01:15:45.914690 1227419 oci.go:106] Successfully prepared a docker volume kubernetes-upgrade-20210310011543-1084876
	* W0310 01:15:45.914730 1227419 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	* W0310 01:15:45.914742 1227419 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	* I0310 01:15:45.914754 1227419 preload.go:97] Checking if preload exists for k8s version v1.14.0 and runtime docker
	* I0310 01:15:45.914801 1227419 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	* I0310 01:15:45.914810 1227419 preload.go:105] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v9-v1.14.0-docker-overlay2-amd64.tar.lz4
	* I0310 01:15:45.914826 1227419 kic.go:175] Starting extracting preloaded images to volume ...
	* I0310 01:15:45.914881 1227419 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v9-v1.14.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-20210310011543-1084876:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir
	* I0310 01:15:46.026054 1227419 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-20210310011543-1084876 --name kubernetes-upgrade-20210310011543-1084876 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20210310011543-1084876 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-20210310011543-1084876 --volume kubernetes-upgrade-20210310011543-1084876:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e
	* I0310 01:15:47.002954 1227419 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20210310011543-1084876 --format=
	* I0310 01:15:47.087944 1227419 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20210310011543-1084876 --format=
	* I0310 01:15:47.183978 1227419 cli_runner.go:115] Run: docker exec kubernetes-upgrade-20210310011543-1084876 stat /var/lib/dpkg/alternatives/iptables
	* I0310 01:15:47.409911 1227419 oci.go:278] the created container "kubernetes-upgrade-20210310011543-1084876" has a running status.
	* I0310 01:15:47.409955 1227419 kic.go:206] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/machines/kubernetes-upgrade-20210310011543-1084876/id_rsa...
	* I0310 01:15:47.776588 1227419 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/machines/kubernetes-upgrade-20210310011543-1084876/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	* I0310 01:15:48.429387 1227419 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20210310011543-1084876 --format=
	* I0310 01:15:44.099357 1227416 out.go:150] * Creating docker container (CPUs=2, Memory=2000MB) ...
	* I0310 01:15:44.099677 1227416 start.go:160] libmachine.API.Create for "offline-docker-20210310011543-1084876" (driver="docker")
	* I0310 01:15:44.099717 1227416 client.go:168] LocalClient.Create starting
	* I0310 01:15:44.099782 1227416 main.go:121] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/certs/ca.pem
	* I0310 01:15:44.099827 1227416 main.go:121] libmachine: Decoding PEM data...
	* I0310 01:15:44.099854 1227416 main.go:121] libmachine: Parsing certificate...
	* I0310 01:15:44.100005 1227416 main.go:121] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/certs/cert.pem
	* I0310 01:15:44.100032 1227416 main.go:121] libmachine: Decoding PEM data...
	* I0310 01:15:44.100050 1227416 main.go:121] libmachine: Parsing certificate...
	* I0310 01:15:44.100529 1227416 cli_runner.go:115] Run: docker network inspect offline-docker-20210310011543-1084876 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	* W0310 01:15:44.157335 1227416 cli_runner.go:162] docker network inspect offline-docker-20210310011543-1084876 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	* I0310 01:15:44.157440 1227416 network_create.go:240] running [docker network inspect offline-docker-20210310011543-1084876] to gather additional debugging logs...
	* I0310 01:15:44.157474 1227416 cli_runner.go:115] Run: docker network inspect offline-docker-20210310011543-1084876
	* W0310 01:15:44.219640 1227416 cli_runner.go:162] docker network inspect offline-docker-20210310011543-1084876 returned with exit code 1
	* I0310 01:15:44.219682 1227416 network_create.go:243] error running [docker network inspect offline-docker-20210310011543-1084876]: docker network inspect offline-docker-20210310011543-1084876: exit status 1
	* stdout:
	* []
	* 
	* stderr:
	* Error: No such network: offline-docker-20210310011543-1084876
	* I0310 01:15:44.219701 1227416 network_create.go:245] output of [docker network inspect offline-docker-20210310011543-1084876]: -- stdout --
	* []
	* 
	* -- /stdout --
	* ** stderr ** 
	* Error: No such network: offline-docker-20210310011543-1084876
	* 
	* ** /stderr **
	* I0310 01:15:44.219779 1227416 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	* I0310 01:15:44.277902 1227416 network.go:193] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	* I0310 01:15:44.277977 1227416 network_create.go:91] attempt to create network 192.168.49.0/24 with subnet: offline-docker-20210310011543-1084876 and gateway 192.168.49.1 and MTU of 1500 ...
	* I0310 01:15:44.278043 1227416 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-docker-20210310011543-1084876
	* W0310 01:15:44.329999 1227416 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-docker-20210310011543-1084876 returned with exit code 1
	* W0310 01:15:44.330197 1227416 out.go:191] ! Unable to create dedicated network, this might result in cluster IP change after restart: failed to create network after 20 attempts
	* I0310 01:15:44.330284 1227416 cli_runner.go:115] Run: docker ps -a --format 
	* I0310 01:15:44.389134 1227416 cli_runner.go:115] Run: docker volume create offline-docker-20210310011543-1084876 --label name.minikube.sigs.k8s.io=offline-docker-20210310011543-1084876 --label created_by.minikube.sigs.k8s.io=true
	* I0310 01:15:44.447832 1227416 oci.go:102] Successfully created a docker volume offline-docker-20210310011543-1084876
	* I0310 01:15:44.447937 1227416 cli_runner.go:115] Run: docker run --rm --name offline-docker-20210310011543-1084876-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-20210310011543-1084876 --entrypoint /usr/bin/test -v offline-docker-20210310011543-1084876:/var gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -d /var/lib
	* I0310 01:15:46.030968 1227416 cli_runner.go:168] Completed: docker run --rm --name offline-docker-20210310011543-1084876-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-20210310011543-1084876 --entrypoint /usr/bin/test -v offline-docker-20210310011543-1084876:/var gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -d /var/lib: (1.582903607s)
	* I0310 01:15:46.031011 1227416 oci.go:106] Successfully prepared a docker volume offline-docker-20210310011543-1084876
	* W0310 01:15:46.031054 1227416 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	* W0310 01:15:46.031066 1227416 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	* I0310 01:15:46.031069 1227416 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	* I0310 01:15:46.031135 1227416 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	* I0310 01:15:46.031138 1227416 preload.go:105] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	* I0310 01:15:46.031151 1227416 kic.go:175] Starting extracting preloaded images to volume ...
	* I0310 01:15:46.031214 1227416 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-20210310011543-1084876:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir
	* I0310 01:15:46.174031 1227416 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname offline-docker-20210310011543-1084876 --name offline-docker-20210310011543-1084876 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-20210310011543-1084876 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=offline-docker-20210310011543-1084876 --volume offline-docker-20210310011543-1084876:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e
	* I0310 01:15:48.293419 1227416 cli_runner.go:168] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname offline-docker-20210310011543-1084876 --name offline-docker-20210310011543-1084876 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-20210310011543-1084876 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=offline-docker-20210310011543-1084876 --volume offline-docker-20210310011543-1084876:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e: (2.119294492s)
	* I0310 01:15:48.293504 1227416 cli_runner.go:115] Run: docker container inspect offline-docker-20210310011543-1084876 --format=
	* I0310 01:15:48.380016 1227416 cli_runner.go:115] Run: docker container inspect offline-docker-20210310011543-1084876 --format=
	* I0310 01:15:48.472629 1227416 cli_runner.go:115] Run: docker exec offline-docker-20210310011543-1084876 stat /var/lib/dpkg/alternatives/iptables
	* I0310 01:15:44.112296 1227420 out.go:150] * Creating docker container (CPUs=2, Memory=2250MB) ...
	* I0310 01:15:44.112631 1227420 start.go:160] libmachine.API.Create for "nospam-20210310011543-1084876" (driver="docker")
	* I0310 01:15:44.112661 1227420 client.go:168] LocalClient.Create starting
	* I0310 01:15:44.112760 1227420 main.go:121] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/certs/ca.pem
	* I0310 01:15:44.112788 1227420 main.go:121] libmachine: Decoding PEM data...
	* I0310 01:15:44.112803 1227420 main.go:121] libmachine: Parsing certificate...
	* I0310 01:15:44.112939 1227420 main.go:121] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/certs/cert.pem
	* I0310 01:15:44.112966 1227420 main.go:121] libmachine: Decoding PEM data...
	* I0310 01:15:44.112976 1227420 main.go:121] libmachine: Parsing certificate...
	* I0310 01:15:44.113363 1227420 cli_runner.go:115] Run: docker network inspect nospam-20210310011543-1084876 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	* W0310 01:15:44.165577 1227420 cli_runner.go:162] docker network inspect nospam-20210310011543-1084876 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	* I0310 01:15:44.165651 1227420 network_create.go:240] running [docker network inspect nospam-20210310011543-1084876] to gather additional debugging logs...
	* I0310 01:15:44.165669 1227420 cli_runner.go:115] Run: docker network inspect nospam-20210310011543-1084876
	* W0310 01:15:44.217464 1227420 cli_runner.go:162] docker network inspect nospam-20210310011543-1084876 returned with exit code 1
	* I0310 01:15:44.217500 1227420 network_create.go:243] error running [docker network inspect nospam-20210310011543-1084876]: docker network inspect nospam-20210310011543-1084876: exit status 1
	* stdout:
	* []
	* 
	* stderr:
	* Error: No such network: nospam-20210310011543-1084876
	* I0310 01:15:44.217527 1227420 network_create.go:245] output of [docker network inspect nospam-20210310011543-1084876]: -- stdout --
	* []
	* 
	* -- /stdout --
	* ** stderr ** 
	* Error: No such network: nospam-20210310011543-1084876
	* 
	* ** /stderr **
	* I0310 01:15:44.217582 1227420 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	* I0310 01:15:44.274347 1227420 network.go:193] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	* I0310 01:15:44.274393 1227420 network_create.go:91] attempt to create network 192.168.49.0/24 with subnet: nospam-20210310011543-1084876 and gateway 192.168.49.1 and MTU of 1500 ...
	* I0310 01:15:44.274459 1227420 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true nospam-20210310011543-1084876
	* W0310 01:15:44.328132 1227420 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true nospam-20210310011543-1084876 returned with exit code 1
	* W0310 01:15:44.328343 1227420 out.go:191] ! Unable to create dedicated network, this might result in cluster IP change after restart: failed to create network after 20 attempts
	* I0310 01:15:44.328406 1227420 cli_runner.go:115] Run: docker ps -a --format 
	* I0310 01:15:44.386330 1227420 cli_runner.go:115] Run: docker volume create nospam-20210310011543-1084876 --label name.minikube.sigs.k8s.io=nospam-20210310011543-1084876 --label created_by.minikube.sigs.k8s.io=true
	* I0310 01:15:44.448016 1227420 oci.go:102] Successfully created a docker volume nospam-20210310011543-1084876
	* I0310 01:15:44.448092 1227420 cli_runner.go:115] Run: docker run --rm --name nospam-20210310011543-1084876-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=nospam-20210310011543-1084876 --entrypoint /usr/bin/test -v nospam-20210310011543-1084876:/var gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -d /var/lib
	* I0310 01:15:45.553487 1227420 cli_runner.go:168] Completed: docker run --rm --name nospam-20210310011543-1084876-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=nospam-20210310011543-1084876 --entrypoint /usr/bin/test -v nospam-20210310011543-1084876:/var gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -d /var/lib: (1.105318602s)
	* I0310 01:15:45.553523 1227420 oci.go:106] Successfully prepared a docker volume nospam-20210310011543-1084876
	* W0310 01:15:45.553557 1227420 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	* W0310 01:15:45.553564 1227420 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	* I0310 01:15:45.553581 1227420 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	* I0310 01:15:45.553628 1227420 preload.go:105] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	* I0310 01:15:45.553633 1227420 kic.go:175] Starting extracting preloaded images to volume ...
	* I0310 01:15:45.553643 1227420 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	* I0310 01:15:45.553697 1227420 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v nospam-20210310011543-1084876:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir
	* I0310 01:15:45.654518 1227420 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname nospam-20210310011543-1084876 --name nospam-20210310011543-1084876 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=nospam-20210310011543-1084876 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=nospam-20210310011543-1084876 --volume nospam-20210310011543-1084876:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e
	* I0310 01:15:47.002343 1227420 cli_runner.go:168] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname nospam-20210310011543-1084876 --name nospam-20210310011543-1084876 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=nospam-20210310011543-1084876 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=nospam-20210310011543-1084876 --volume nospam-20210310011543-1084876:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e: (1.347731328s)
	* I0310 01:15:47.002429 1227420 cli_runner.go:115] Run: docker container inspect nospam-2021031001154

-- /stdout --
** stderr ** 
	E0310 01:16:53.032944 1246808 out.go:335] unable to parse "* I0310 01:15:43.746959 1227416 cli_runner.go:115] Run: docker system info --format \"{{json .}}\"\n": template: * I0310 01:15:43.746959 1227416 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	:1: function "json" not defined - returning raw string.
	E0310 01:16:53.050617 1246808 out.go:335] unable to parse "* I0310 01:15:43.749770 1227419 cli_runner.go:115] Run: docker system info --format \"{{json .}}\"\n": template: * I0310 01:15:43.749770 1227419 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	:1: function "json" not defined - returning raw string.
	E0310 01:16:53.091429 1246808 out.go:335] unable to parse "* I0310 01:15:43.746570 1227418 cli_runner.go:115] Run: docker system info --format \"{{json .}}\"\n": template: * I0310 01:15:43.746570 1227418 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	:1: function "json" not defined - returning raw string.
	E0310 01:16:53.130227 1246808 out.go:335] unable to parse "* I0310 01:15:43.766577 1227420 cli_runner.go:115] Run: docker system info --format \"{{json .}}\"\n": template: * I0310 01:15:43.766577 1227420 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	:1: function "json" not defined - returning raw string.
	E0310 01:16:53.156744 1246808 out.go:335] unable to parse "* I0310 01:15:43.870577 1227419 cli_runner.go:115] Run: docker system info --format \"{{json .}}\"\n": template: * I0310 01:15:43.870577 1227419 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	:1: function "json" not defined - returning raw string.
	E0310 01:16:53.179766 1246808 out.go:335] unable to parse "* I0310 01:15:43.870715 1227416 cli_runner.go:115] Run: docker system info --format \"{{json .}}\"\n": template: * I0310 01:15:43.870715 1227416 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	:1: function "json" not defined - returning raw string.
	E0310 01:16:53.202908 1246808 out.go:335] unable to parse "* I0310 01:15:43.879260 1227418 cli_runner.go:115] Run: docker system info --format \"{{json .}}\"\n": template: * I0310 01:15:43.879260 1227418 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	:1: function "json" not defined - returning raw string.
	E0310 01:16:53.225413 1246808 out.go:335] unable to parse "* I0310 01:15:43.893399 1227420 cli_runner.go:115] Run: docker system info --format \"{{json .}}\"\n": template: * I0310 01:15:43.893399 1227420 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	:1: function "json" not defined - returning raw string.
	E0310 01:16:53.385115 1246808 out.go:340] unable to execute * I0310 01:15:44.092972 1227419 cli_runner.go:115] Run: docker network inspect kubernetes-upgrade-20210310011543-1084876 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	: template: * I0310 01:15:44.092972 1227419 cli_runner.go:115] Run: docker network inspect kubernetes-upgrade-20210310011543-1084876 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	:1:297: executing "* I0310 01:15:44.092972 1227419 cli_runner.go:115] Run: docker network inspect kubernetes-upgrade-20210310011543-1084876 --format \"{\"Name\": \"{{.Name}}\",\"Driver\": \"{{.Driver}}\",\"Subnet\": \"{{range .IPAM.Config}}{{.Subnet}}{{end}}\",\"Gateway\": \"{{range .IPAM.Config}}{{.Gateway}}{{end}}\",\"MTU\": {{if (index .Options \"com.docker.network.driver.mtu\")}}{{(index .Options \"com.docker.network.driver.mtu\")}}{{else}}0{{end}}, \"ContainerIPs\": [{{range $k,$v := .Containers }}\"{{$v.IPv4Address}}\",{{end}}]}\"\n" at <index .Options "com.docker.network.driver.mtu">: error calling index: index of untyped nil - returning raw string.
	E0310 01:16:53.391558 1246808 out.go:340] unable to execute * W0310 01:15:44.149219 1227419 cli_runner.go:162] docker network inspect kubernetes-upgrade-20210310011543-1084876 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	: template: * W0310 01:15:44.149219 1227419 cli_runner.go:162] docker network inspect kubernetes-upgrade-20210310011543-1084876 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	:1:292: executing "* W0310 01:15:44.149219 1227419 cli_runner.go:162] docker network inspect kubernetes-upgrade-20210310011543-1084876 --format \"{\"Name\": \"{{.Name}}\",\"Driver\": \"{{.Driver}}\",\"Subnet\": \"{{range .IPAM.Config}}{{.Subnet}}{{end}}\",\"Gateway\": \"{{range .IPAM.Config}}{{.Gateway}}{{end}}\",\"MTU\": {{if (index .Options \"com.docker.network.driver.mtu\")}}{{(index .Options \"com.docker.network.driver.mtu\")}}{{else}}0{{end}}, \"ContainerIPs\": [{{range $k,$v := .Containers }}\"{{$v.IPv4Address}}\",{{end}}]}\" returned with exit code 1\n" at <index .Options "com.docker.network.driver.mtu">: error calling index: index of untyped nil - returning raw string.
	E0310 01:16:53.429471 1246808 out.go:340] unable to execute * I0310 01:15:44.206373 1227419 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	: template: * I0310 01:15:44.206373 1227419 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	:1:262: executing "* I0310 01:15:44.206373 1227419 cli_runner.go:115] Run: docker network inspect bridge --format \"{\"Name\": \"{{.Name}}\",\"Driver\": \"{{.Driver}}\",\"Subnet\": \"{{range .IPAM.Config}}{{.Subnet}}{{end}}\",\"Gateway\": \"{{range .IPAM.Config}}{{.Gateway}}{{end}}\",\"MTU\": {{if (index .Options \"com.docker.network.driver.mtu\")}}{{(index .Options \"com.docker.network.driver.mtu\")}}{{else}}0{{end}}, \"ContainerIPs\": [{{range $k,$v := .Containers }}\"{{$v.IPv4Address}}\",{{end}}]}\"\n" at <index .Options "com.docker.network.driver.mtu">: error calling index: index of untyped nil - returning raw string.
	E0310 01:16:53.466245 1246808 out.go:335] unable to parse "* I0310 01:15:45.914801 1227419 cli_runner.go:115] Run: docker info --format \"'{{json .SecurityOptions}}'\"\n": template: * I0310 01:15:45.914801 1227419 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	:1: function "json" not defined - returning raw string.
	E0310 01:16:53.513631 1246808 out.go:340] unable to execute * I0310 01:15:44.100529 1227416 cli_runner.go:115] Run: docker network inspect offline-docker-20210310011543-1084876 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	: template: * I0310 01:15:44.100529 1227416 cli_runner.go:115] Run: docker network inspect offline-docker-20210310011543-1084876 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	:1:293: executing "* I0310 01:15:44.100529 1227416 cli_runner.go:115] Run: docker network inspect offline-docker-20210310011543-1084876 --format \"{\"Name\": \"{{.Name}}\",\"Driver\": \"{{.Driver}}\",\"Subnet\": \"{{range .IPAM.Config}}{{.Subnet}}{{end}}\",\"Gateway\": \"{{range .IPAM.Config}}{{.Gateway}}{{end}}\",\"MTU\": {{if (index .Options \"com.docker.network.driver.mtu\")}}{{(index .Options \"com.docker.network.driver.mtu\")}}{{else}}0{{end}}, \"ContainerIPs\": [{{range $k,$v := .Containers }}\"{{$v.IPv4Address}}\",{{end}}]}\"\n" at <index .Options "com.docker.network.driver.mtu">: error calling index: index of untyped nil - returning raw string.
	E0310 01:16:53.519171 1246808 out.go:340] unable to execute * W0310 01:15:44.157335 1227416 cli_runner.go:162] docker network inspect offline-docker-20210310011543-1084876 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	: template: * W0310 01:15:44.157335 1227416 cli_runner.go:162] docker network inspect offline-docker-20210310011543-1084876 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	:1:288: executing "* W0310 01:15:44.157335 1227416 cli_runner.go:162] docker network inspect offline-docker-20210310011543-1084876 --format \"{\"Name\": \"{{.Name}}\",\"Driver\": \"{{.Driver}}\",\"Subnet\": \"{{range .IPAM.Config}}{{.Subnet}}{{end}}\",\"Gateway\": \"{{range .IPAM.Config}}{{.Gateway}}{{end}}\",\"MTU\": {{if (index .Options \"com.docker.network.driver.mtu\")}}{{(index .Options \"com.docker.network.driver.mtu\")}}{{else}}0{{end}}, \"ContainerIPs\": [{{range $k,$v := .Containers }}\"{{$v.IPv4Address}}\",{{end}}]}\" returned with exit code 1\n" at <index .Options "com.docker.network.driver.mtu">: error calling index: index of untyped nil - returning raw string.
	E0310 01:16:53.560065 1246808 out.go:340] unable to execute * I0310 01:15:44.219779 1227416 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	: template: * I0310 01:15:44.219779 1227416 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	:1:262: executing "* I0310 01:15:44.219779 1227416 cli_runner.go:115] Run: docker network inspect bridge --format \"{\"Name\": \"{{.Name}}\",\"Driver\": \"{{.Driver}}\",\"Subnet\": \"{{range .IPAM.Config}}{{.Subnet}}{{end}}\",\"Gateway\": \"{{range .IPAM.Config}}{{.Gateway}}{{end}}\",\"MTU\": {{if (index .Options \"com.docker.network.driver.mtu\")}}{{(index .Options \"com.docker.network.driver.mtu\")}}{{else}}0{{end}}, \"ContainerIPs\": [{{range $k,$v := .Containers }}\"{{$v.IPv4Address}}\",{{end}}]}\"\n" at <index .Options "com.docker.network.driver.mtu">: error calling index: index of untyped nil - returning raw string.
	E0310 01:16:53.594861 1246808 out.go:335] unable to parse "* I0310 01:15:46.031135 1227416 cli_runner.go:115] Run: docker info --format \"'{{json .SecurityOptions}}'\"\n": template: * I0310 01:15:46.031135 1227416 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	:1: function "json" not defined - returning raw string.
	E0310 01:16:53.638440 1246808 out.go:340] unable to execute * I0310 01:15:44.113363 1227420 cli_runner.go:115] Run: docker network inspect nospam-20210310011543-1084876 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	: template: * I0310 01:15:44.113363 1227420 cli_runner.go:115] Run: docker network inspect nospam-20210310011543-1084876 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	:1:285: executing "* I0310 01:15:44.113363 1227420 cli_runner.go:115] Run: docker network inspect nospam-20210310011543-1084876 --format \"{\"Name\": \"{{.Name}}\",\"Driver\": \"{{.Driver}}\",\"Subnet\": \"{{range .IPAM.Config}}{{.Subnet}}{{end}}\",\"Gateway\": \"{{range .IPAM.Config}}{{.Gateway}}{{end}}\",\"MTU\": {{if (index .Options \"com.docker.network.driver.mtu\")}}{{(index .Options \"com.docker.network.driver.mtu\")}}{{else}}0{{end}}, \"ContainerIPs\": [{{range $k,$v := .Containers }}\"{{$v.IPv4Address}}\",{{end}}]}\"\n" at <index .Options "com.docker.network.driver.mtu">: error calling index: index of untyped nil - returning raw string.
	E0310 01:16:53.644054 1246808 out.go:340] unable to execute * W0310 01:15:44.165577 1227420 cli_runner.go:162] docker network inspect nospam-20210310011543-1084876 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	: template: * W0310 01:15:44.165577 1227420 cli_runner.go:162] docker network inspect nospam-20210310011543-1084876 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	:1:280: executing "* W0310 01:15:44.165577 1227420 cli_runner.go:162] docker network inspect nospam-20210310011543-1084876 --format \"{\"Name\": \"{{.Name}}\",\"Driver\": \"{{.Driver}}\",\"Subnet\": \"{{range .IPAM.Config}}{{.Subnet}}{{end}}\",\"Gateway\": \"{{range .IPAM.Config}}{{.Gateway}}{{end}}\",\"MTU\": {{if (index .Options \"com.docker.network.driver.mtu\")}}{{(index .Options \"com.docker.network.driver.mtu\")}}{{else}}0{{end}}, \"ContainerIPs\": [{{range $k,$v := .Containers }}\"{{$v.IPv4Address}}\",{{end}}]}\" returned with exit code 1\n" at <index .Options "com.docker.network.driver.mtu">: error calling index: index of untyped nil - returning raw string.
	E0310 01:16:53.688890 1246808 out.go:340] unable to execute * I0310 01:15:44.217582 1227420 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	: template: * I0310 01:15:44.217582 1227420 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	:1:262: executing "* I0310 01:15:44.217582 1227420 cli_runner.go:115] Run: docker network inspect bridge --format \"{\"Name\": \"{{.Name}}\",\"Driver\": \"{{.Driver}}\",\"Subnet\": \"{{range .IPAM.Config}}{{.Subnet}}{{end}}\",\"Gateway\": \"{{range .IPAM.Config}}{{.Gateway}}{{end}}\",\"MTU\": {{if (index .Options \"com.docker.network.driver.mtu\")}}{{(index .Options \"com.docker.network.driver.mtu\")}}{{else}}0{{end}}, \"ContainerIPs\": [{{range $k,$v := .Containers }}\"{{$v.IPv4Address}}\",{{end}}]}\"\n" at <index .Options "com.docker.network.driver.mtu">: error calling index: index of untyped nil - returning raw string.
	E0310 01:16:53.730515 1246808 out.go:335] unable to parse "* I0310 01:15:45.553643 1227420 cli_runner.go:115] Run: docker info --format \"'{{json .SecurityOptions}}'\"\n": template: * I0310 01:15:45.553643 1227420 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	:1: function "json" not defined - returning raw string.

** /stderr **
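The `unable to parse` / `unable to execute` errors above come from minikube's `out.go` re-rendering captured log lines through Go's `text/template`: lines that themselves contain template syntax either fail to parse (`{{json .}}` — `json` is not a registered function) or fail to execute (`index .Options …` against nil data — `index of untyped nil`), and minikube falls back to printing the raw string. The sketch below is a minimal, hypothetical reproduction of that fallback pattern, not minikube's actual `out.go` code:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// render mimics the fallback seen in the stderr above: try to parse and
// execute a captured log line as a Go template; on any failure, return
// the raw string unchanged. This is a sketch, not minikube's out.go.
func render(line string) string {
	tmpl, err := template.New("logline").Parse(line)
	if err != nil {
		// e.g. `function "json" not defined` -- returning raw string
		return line
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, nil); err != nil {
		// e.g. `error calling index: index of untyped nil`
		return line
	}
	return buf.String()
}

func main() {
	// Parse fails on {{json ...}}, so the raw line is printed verbatim.
	fmt.Println(render(`Run: docker info --format "'{{json .SecurityOptions}}'"`))
}
```

The same fallback explains why the report still shows the full `docker network inspect … --format` commands verbatim: the formatter gives up and emits the original line rather than dropping it.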
helpers_test.go:250: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p nospam-20210310011543-1084876 -n nospam-20210310011543-1084876
helpers_test.go:257: (dbg) Run:  kubectl --context nospam-20210310011543-1084876 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:263: non-running pods: storage-provisioner
helpers_test.go:265: ======> post-mortem[TestErrorSpam]: describe non-running pods <======
helpers_test.go:268: (dbg) Run:  kubectl --context nospam-20210310011543-1084876 describe pod storage-provisioner
helpers_test.go:268: (dbg) Non-zero exit: kubectl --context nospam-20210310011543-1084876 describe pod storage-provisioner: exit status 1 (94.953756ms)

** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:270: kubectl --context nospam-20210310011543-1084876 describe pod storage-provisioner: exit status 1
helpers_test.go:171: Cleaning up "nospam-20210310011543-1084876" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-linux-amd64 delete -p nospam-20210310011543-1084876
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p nospam-20210310011543-1084876: (3.359912862s)
--- FAIL: TestErrorSpam (74.48s)


Test pass (222/241)

Order passed test Duration
3 TestDownloadOnly/v1.14.0/json-events 7.37
4 TestDownloadOnly/v1.14.0/preload-exists 0
6 TestDownloadOnly/v1.14.0/binaries 0
9 TestDownloadOnly/v1.20.2/json-events 7.14
10 TestDownloadOnly/v1.20.2/preload-exists 0
12 TestDownloadOnly/v1.20.2/binaries 0
15 TestDownloadOnly/v1.20.5-rc.0/json-events 6.38
16 TestDownloadOnly/v1.20.5-rc.0/preload-exists 0
18 TestDownloadOnly/v1.20.5-rc.0/binaries 0
20 TestDownloadOnly/DeleteAll 1.79
21 TestDownloadOnly/DeleteAlwaysSucceeds 0.73
22 TestDownloadOnlyKic 11.25
23 TestOffline 146.85
26 TestAddons/parallel/Registry 13.93
27 TestAddons/parallel/Ingress 15.15
28 TestAddons/parallel/MetricsServer 47.97
29 TestAddons/parallel/HelmTiller 9.66
31 TestAddons/parallel/CSI 191
33 TestCertOptions 53.94
34 TestDockerFlags 56.06
35 TestForceSystemdFlag 42.68
36 TestForceSystemdEnv 36.76
43 TestFunctional/serial/CopySyncFile 0
44 TestFunctional/serial/StartWithProxy 141.36
45 TestFunctional/serial/AuditLog 0
46 TestFunctional/serial/SoftStart 3.88
47 TestFunctional/serial/KubeContext 0.06
48 TestFunctional/serial/KubectlGetPods 0.28
51 TestFunctional/serial/CacheCmd/cache/add_remote 2.81
52 TestFunctional/serial/CacheCmd/cache/add_local 0.86
53 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.08
54 TestFunctional/serial/CacheCmd/cache/list 0.07
55 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.33
56 TestFunctional/serial/CacheCmd/cache/cache_reload 2
57 TestFunctional/serial/CacheCmd/cache/delete 0.15
58 TestFunctional/serial/MinikubeKubectlCmd 0.38
59 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.38
60 TestFunctional/serial/ExtraConfig 93.83
61 TestFunctional/serial/ComponentHealth 0.09
63 TestFunctional/parallel/ConfigCmd 0.52
64 TestFunctional/parallel/DashboardCmd 6.59
65 TestFunctional/parallel/DryRun 0.87
66 TestFunctional/parallel/StatusCmd 1.53
67 TestFunctional/parallel/LogsCmd 6.36
68 TestFunctional/parallel/MountCmd 11.21
70 TestFunctional/parallel/ServiceCmd 13.85
71 TestFunctional/parallel/AddonsCmd 0.24
72 TestFunctional/parallel/PersistentVolumeClaim 45.68
74 TestFunctional/parallel/SSHCmd 0.69
75 TestFunctional/parallel/MySQL 32.09
76 TestFunctional/parallel/FileSync 0.5
77 TestFunctional/parallel/CertSync 1.07
79 TestFunctional/parallel/DockerEnv 1.39
80 TestFunctional/parallel/NodeLabels 0.08
81 TestFunctional/parallel/LoadImage 2.23
82 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
83 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.16
84 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
85 TestFunctional/parallel/ProfileCmd/profile_not_create 0.56
86 TestFunctional/parallel/ProfileCmd/profile_list 0.5
87 TestFunctional/parallel/ProfileCmd/profile_json_output 0.54
89 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
91 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
92 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
96 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
100 TestJSONOutput/start/Audit 0
102 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
103 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
105 TestJSONOutput/pause/Audit 0
107 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
108 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
110 TestJSONOutput/unpause/Audit 0
112 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
113 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
115 TestJSONOutput/stop/Audit 0
117 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
118 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
119 TestErrorJSONOutput 0.44
121 TestKicCustomNetwork/create_custom_network 32.43
122 TestKicCustomNetwork/use_default_bridge_network 30.02
123 TestKicExistingNetwork 30.97
124 TestMainNoArgs 0.08
127 TestMultiNode/serial/FreshStart2Nodes 144.49
128 TestMultiNode/serial/AddNode 22.16
129 TestMultiNode/serial/ProfileList 0.36
130 TestMultiNode/serial/StopNode 2.78
131 TestMultiNode/serial/StartAfterStop 54.4
132 TestMultiNode/serial/DeleteNode 6.04
133 TestMultiNode/serial/StopMultiNode 7.68
134 TestMultiNode/serial/RestartMultiNode 464.51
135 TestMultiNode/serial/ValidateNameConflict 33.06
141 TestDebPackageInstall/install_amd64_debian:sid/minikube 0
142 TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver 15.13
144 TestDebPackageInstall/install_amd64_debian:latest/minikube 0
145 TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver 12.73
147 TestDebPackageInstall/install_amd64_debian:10/minikube 0
148 TestDebPackageInstall/install_amd64_debian:10/kvm2-driver 12.76
150 TestDebPackageInstall/install_amd64_debian:9/minikube 0
151 TestDebPackageInstall/install_amd64_debian:9/kvm2-driver 10.26
153 TestDebPackageInstall/install_amd64_ubuntu:latest/minikube 0
154 TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver 18.54
156 TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube 0
157 TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver 17.36
159 TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube 0
160 TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver 18.29
162 TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube 0
163 TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver 16.1
164 TestPreload 127.02
166 TestScheduledStopUnix 63.51
167 TestSkaffold 78.96
169 TestInsufficientStorage 10.94
170 TestRunningBinaryUpgrade 100.94
172 TestKubernetesUpgrade 220.72
173 TestMissingContainerUpgrade 126.99
175 TestPause/serial/Start 143.95
187 TestPause/serial/SecondStartNoReconfiguration 4.87
188 TestPause/serial/Pause 0.86
189 TestPause/serial/VerifyStatus 0.54
190 TestPause/serial/Unpause 0.79
191 TestPause/serial/PauseAgain 0.95
192 TestPause/serial/DeletePaused 3.17
193 TestPause/serial/VerifyDeletedResources 0.63
201 TestNetworkPlugins/group/auto/Start 150.46
202 TestNetworkPlugins/group/false/Start 135.09
203 TestNetworkPlugins/group/cilium/Start 135.36
204 TestStoppedBinaryUpgrade/MinikubeLogs 7.27
205 TestNetworkPlugins/group/calico/Start 139.1
206 TestNetworkPlugins/group/auto/KubeletFlags 0.36
207 TestNetworkPlugins/group/auto/NetCatPod 9.71
208 TestNetworkPlugins/group/false/KubeletFlags 0.39
209 TestNetworkPlugins/group/false/NetCatPod 9.38
210 TestNetworkPlugins/group/cilium/ControllerPod 5.02
211 TestNetworkPlugins/group/auto/DNS 0.22
212 TestNetworkPlugins/group/auto/Localhost 0.22
213 TestNetworkPlugins/group/auto/HairPin 5.19
214 TestNetworkPlugins/group/cilium/KubeletFlags 0.35
215 TestNetworkPlugins/group/cilium/NetCatPod 10.38
216 TestNetworkPlugins/group/false/DNS 0.31
217 TestNetworkPlugins/group/false/Localhost 0.21
218 TestNetworkPlugins/group/false/HairPin 5.27
219 TestNetworkPlugins/group/custom-weave/Start 138.94
220 TestNetworkPlugins/group/cilium/DNS 0.3
221 TestNetworkPlugins/group/cilium/Localhost 0.27
222 TestNetworkPlugins/group/enable-default-cni/Start 165.22
223 TestNetworkPlugins/group/cilium/HairPin 0.26
224 TestNetworkPlugins/group/kindnet/Start 144.65
225 TestNetworkPlugins/group/calico/ControllerPod 5.02
226 TestNetworkPlugins/group/calico/KubeletFlags 0.38
227 TestNetworkPlugins/group/calico/NetCatPod 11.74
228 TestNetworkPlugins/group/calico/DNS 0.24
229 TestNetworkPlugins/group/calico/Localhost 0.23
230 TestNetworkPlugins/group/calico/HairPin 0.21
231 TestNetworkPlugins/group/bridge/Start 129.99
232 TestNetworkPlugins/group/custom-weave/KubeletFlags 0.36
233 TestNetworkPlugins/group/custom-weave/NetCatPod 8.32
234 TestNetworkPlugins/group/kubenet/Start 119.67
235 TestNetworkPlugins/group/kindnet/ControllerPod 5.03
236 TestNetworkPlugins/group/kindnet/KubeletFlags 0.42
237 TestNetworkPlugins/group/kindnet/NetCatPod 9.37
238 TestNetworkPlugins/group/kindnet/DNS 0.25
239 TestNetworkPlugins/group/kindnet/Localhost 0.23
240 TestNetworkPlugins/group/kindnet/HairPin 0.23
241 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.47
242 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.56
244 TestStartStop/group/old-k8s-version/serial/FirstStart 153.88
245 TestNetworkPlugins/group/enable-default-cni/DNS 0.22
246 TestNetworkPlugins/group/enable-default-cni/Localhost 0.23
247 TestNetworkPlugins/group/enable-default-cni/HairPin 0.21
249 TestStartStop/group/no-preload/serial/FirstStart 231.9
250 TestNetworkPlugins/group/bridge/KubeletFlags 0.56
251 TestNetworkPlugins/group/bridge/NetCatPod 15.25
252 TestNetworkPlugins/group/bridge/DNS 0.24
253 TestNetworkPlugins/group/bridge/Localhost 0.22
254 TestNetworkPlugins/group/bridge/HairPin 0.23
256 TestStartStop/group/default-k8s-different-port/serial/FirstStart 138.56
257 TestNetworkPlugins/group/kubenet/KubeletFlags 0.35
258 TestNetworkPlugins/group/kubenet/NetCatPod 9.33
259 TestNetworkPlugins/group/kubenet/DNS 0.21
260 TestNetworkPlugins/group/kubenet/Localhost 0.19
261 TestNetworkPlugins/group/kubenet/HairPin 0.2
263 TestStartStop/group/newest-cni/serial/FirstStart 117.91
264 TestStartStop/group/old-k8s-version/serial/DeployApp 9.61
265 TestStartStop/group/old-k8s-version/serial/Stop 11.21
266 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.29
267 TestStartStop/group/old-k8s-version/serial/SecondStart 71.61
268 TestStartStop/group/default-k8s-different-port/serial/DeployApp 8.7
269 TestStartStop/group/default-k8s-different-port/serial/Stop 11.25
270 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 0.27
271 TestStartStop/group/default-k8s-different-port/serial/SecondStart 92.08
272 TestStartStop/group/newest-cni/serial/DeployApp 0
273 TestStartStop/group/newest-cni/serial/Stop 1.51
274 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.29
275 TestStartStop/group/newest-cni/serial/SecondStart 84.14
276 TestStartStop/group/no-preload/serial/DeployApp 9.47
277 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
278 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.01
279 TestStartStop/group/no-preload/serial/Stop 11.18
280 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.36
281 TestStartStop/group/old-k8s-version/serial/Pause 3.3
283 TestStartStop/group/embed-certs/serial/FirstStart 132.15
284 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.28
285 TestStartStop/group/no-preload/serial/SecondStart 80.7
286 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 8.02
287 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 5.01
288 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 0.38
289 TestStartStop/group/default-k8s-different-port/serial/Pause 3.54
290 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
291 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
292 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.36
293 TestStartStop/group/newest-cni/serial/Pause 3.15
294 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 29.02
295 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.01
296 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.35
297 TestStartStop/group/no-preload/serial/Pause 3.24
298 TestStartStop/group/embed-certs/serial/DeployApp 9.46
299 TestStartStop/group/embed-certs/serial/Stop 11.13
300 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.25
301 TestStartStop/group/embed-certs/serial/SecondStart 101.51
302 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 8.02
303 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.01
304 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.34
305 TestStartStop/group/embed-certs/serial/Pause 3.16
TestDownloadOnly/v1.14.0/json-events (7.37s)

=== RUN   TestDownloadOnly/v1.14.0/json-events
aaa_download_only_test.go:68: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210310004129-1084876 --force --alsologtostderr --kubernetes-version=v1.14.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:68: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210310004129-1084876 --force --alsologtostderr --kubernetes-version=v1.14.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (7.372672877s)
--- PASS: TestDownloadOnly/v1.14.0/json-events (7.37s)

TestDownloadOnly/v1.14.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.14.0/preload-exists
--- PASS: TestDownloadOnly/v1.14.0/preload-exists (0.00s)

TestDownloadOnly/v1.14.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.14.0/binaries
--- PASS: TestDownloadOnly/v1.14.0/binaries (0.00s)

TestDownloadOnly/v1.20.2/json-events (7.14s)

=== RUN   TestDownloadOnly/v1.20.2/json-events
aaa_download_only_test.go:68: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210310004129-1084876 --force --alsologtostderr --kubernetes-version=v1.20.2 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:68: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210310004129-1084876 --force --alsologtostderr --kubernetes-version=v1.20.2 --container-runtime=docker --driver=docker  --container-runtime=docker: (7.143274749s)
--- PASS: TestDownloadOnly/v1.20.2/json-events (7.14s)

TestDownloadOnly/v1.20.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.2/preload-exists
--- PASS: TestDownloadOnly/v1.20.2/preload-exists (0.00s)

TestDownloadOnly/v1.20.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.2/binaries
--- PASS: TestDownloadOnly/v1.20.2/binaries (0.00s)

TestDownloadOnly/v1.20.5-rc.0/json-events (6.38s)

=== RUN   TestDownloadOnly/v1.20.5-rc.0/json-events
aaa_download_only_test.go:68: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210310004129-1084876 --force --alsologtostderr --kubernetes-version=v1.20.5-rc.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:68: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210310004129-1084876 --force --alsologtostderr --kubernetes-version=v1.20.5-rc.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (6.380373481s)
--- PASS: TestDownloadOnly/v1.20.5-rc.0/json-events (6.38s)

TestDownloadOnly/v1.20.5-rc.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.5-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.5-rc.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.5-rc.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.5-rc.0/binaries
--- PASS: TestDownloadOnly/v1.20.5-rc.0/binaries (0.00s)

TestDownloadOnly/DeleteAll (1.79s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:170: (dbg) Run:  out/minikube-linux-amd64 delete --all
aaa_download_only_test.go:170: (dbg) Done: out/minikube-linux-amd64 delete --all: (1.78928206s)
--- PASS: TestDownloadOnly/DeleteAll (1.79s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.73s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:182: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-20210310004129-1084876
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.73s)

TestDownloadOnlyKic (11.25s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:206: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-20210310004153-1084876 --force --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:206: (dbg) Done: out/minikube-linux-amd64 start --download-only -p download-docker-20210310004153-1084876 --force --alsologtostderr --driver=docker  --container-runtime=docker: (7.806681509s)
helpers_test.go:171: Cleaning up "download-docker-20210310004153-1084876" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-20210310004153-1084876
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p download-docker-20210310004153-1084876: (2.149318942s)
--- PASS: TestDownloadOnlyKic (11.25s)

TestOffline (146.85s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:54: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-20210310011543-1084876 --alsologtostderr -v=1 --memory=2000 --wait=true --driver=docker  --container-runtime=docker

=== CONT  TestOffline
aab_offline_test.go:54: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-20210310011543-1084876 --alsologtostderr -v=1 --memory=2000 --wait=true --driver=docker  --container-runtime=docker: (2m23.859187053s)
helpers_test.go:171: Cleaning up "offline-docker-20210310011543-1084876" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-20210310011543-1084876

=== CONT  TestOffline
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-20210310011543-1084876: (2.992818375s)
--- PASS: TestOffline (146.85s)

TestAddons/parallel/Registry (13.93s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:237: registry stabilized in 19.061111ms

=== CONT  TestAddons/parallel/Registry
addons_test.go:239: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/Registry
helpers_test.go:335: "registry-bqsz2" [28fe6c05-7f77-44d8-a3d2-839555a750a4] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:239: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.01946801s
addons_test.go:242: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:335: "registry-proxy-jgjdk" [450bf9c8-1e1d-41f4-a105-2836463b7ccd] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:242: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.010078557s
addons_test.go:247: (dbg) Run:  kubectl --context addons-20210310004204-1084876 delete po -l run=registry-test --now
addons_test.go:252: (dbg) Run:  kubectl --context addons-20210310004204-1084876 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

=== CONT  TestAddons/parallel/Registry
addons_test.go:252: (dbg) Done: kubectl --context addons-20210310004204-1084876 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.090145579s)
addons_test.go:266: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210310004204-1084876 ip
2021/03/10 00:44:44 [DEBUG] GET http://192.168.49.205:5000
addons_test.go:295: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210310004204-1084876 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (13.93s)

TestAddons/parallel/Ingress (15.15s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:155: (dbg) TestAddons/parallel/Ingress: waiting 12m0s for pods matching "app.kubernetes.io/name=ingress-nginx" in namespace "kube-system" ...
helpers_test.go:335: "ingress-nginx-admission-create-tw5ql" [5d50f961-79cf-4f1f-a451-8cc9d398cc8b] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:155: (dbg) TestAddons/parallel/Ingress: app.kubernetes.io/name=ingress-nginx healthy within 5.467339ms
addons_test.go:160: (dbg) Run:  kubectl --context addons-20210310004204-1084876 replace --force -f testdata/nginx-ing.yaml
addons_test.go:165: kubectl --context addons-20210310004204-1084876 replace --force -f testdata/nginx-ing.yaml: unexpected stderr: Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
(may be temporary)
addons_test.go:174: (dbg) Run:  kubectl --context addons-20210310004204-1084876 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:179: (dbg) TestAddons/parallel/Ingress: waiting 4m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:335: "nginx" [b0ab56c3-83b0-4b81-add0-86c7ed0dddce] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestAddons/parallel/Ingress
helpers_test.go:335: "nginx" [b0ab56c3-83b0-4b81-add0-86c7ed0dddce] Running

=== CONT  TestAddons/parallel/Ingress
addons_test.go:179: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.006214167s
addons_test.go:197: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210310004204-1084876 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:219: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210310004204-1084876 addons disable ingress --alsologtostderr -v=1
addons_test.go:219: (dbg) Done: out/minikube-linux-amd64 -p addons-20210310004204-1084876 addons disable ingress --alsologtostderr -v=1: (2.289986451s)
--- PASS: TestAddons/parallel/Ingress (15.15s)

TestAddons/parallel/MetricsServer (47.97s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:313: metrics-server stabilized in 18.940972ms

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:315: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/MetricsServer
helpers_test.go:335: "metrics-server-56c4f8c9d6-s86jb" [f3f5927f-9bfd-4423-a9ef-ec58fb96681d] Running

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:315: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.020077594s
addons_test.go:321: (dbg) Run:  kubectl --context addons-20210310004204-1084876 top pods -n kube-system

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:321: (dbg) Non-zero exit: kubectl --context addons-20210310004204-1084876 top pods -n kube-system: exit status 1 (117.200328ms)

** stderr **
	W0310 00:44:36.176810 1097379 top_pod.go:265] Metrics not available for pod kube-system/etcd-addons-20210310004204-1084876, age: 2m1.176794226s
	error: Metrics not available for pod kube-system/etcd-addons-20210310004204-1084876, age: 2m1.176794226s
** /stderr **
addons_test.go:321: (dbg) Run:  kubectl --context addons-20210310004204-1084876 top pods -n kube-system
addons_test.go:321: (dbg) Non-zero exit: kubectl --context addons-20210310004204-1084876 top pods -n kube-system: exit status 1 (112.802635ms)

** stderr **
	W0310 00:44:39.608647 1097666 top_pod.go:265] Metrics not available for pod kube-system/etcd-addons-20210310004204-1084876, age: 2m4.608639064s
	error: Metrics not available for pod kube-system/etcd-addons-20210310004204-1084876, age: 2m4.608639064s
** /stderr **

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:321: (dbg) Run:  kubectl --context addons-20210310004204-1084876 top pods -n kube-system
addons_test.go:321: (dbg) Non-zero exit: kubectl --context addons-20210310004204-1084876 top pods -n kube-system: exit status 1 (109.286613ms)

** stderr **
	W0310 00:44:46.200417 1098710 top_pod.go:265] Metrics not available for pod kube-system/etcd-addons-20210310004204-1084876, age: 2m11.200406216s
	error: Metrics not available for pod kube-system/etcd-addons-20210310004204-1084876, age: 2m11.200406216s
** /stderr **

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:321: (dbg) Run:  kubectl --context addons-20210310004204-1084876 top pods -n kube-system
addons_test.go:321: (dbg) Non-zero exit: kubectl --context addons-20210310004204-1084876 top pods -n kube-system: exit status 1 (108.954729ms)

** stderr **
	W0310 00:44:54.170013 1099534 top_pod.go:265] Metrics not available for pod kube-system/coredns-74ff55c5b-xlj4r, age: 2m1.170000999s
	error: Metrics not available for pod kube-system/coredns-74ff55c5b-xlj4r, age: 2m1.170000999s
** /stderr **

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:321: (dbg) Run:  kubectl --context addons-20210310004204-1084876 top pods -n kube-system
addons_test.go:321: (dbg) Non-zero exit: kubectl --context addons-20210310004204-1084876 top pods -n kube-system: exit status 1 (88.706937ms)

** stderr **
	W0310 00:45:03.753878 1100154 top_pod.go:265] Metrics not available for pod kube-system/coredns-74ff55c5b-xlj4r, age: 2m10.753868371s
	error: Metrics not available for pod kube-system/coredns-74ff55c5b-xlj4r, age: 2m10.753868371s
** /stderr **

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:321: (dbg) Run:  kubectl --context addons-20210310004204-1084876 top pods -n kube-system

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:339: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210310004204-1084876 addons disable metrics-server --alsologtostderr -v=1

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:339: (dbg) Done: out/minikube-linux-amd64 -p addons-20210310004204-1084876 addons disable metrics-server --alsologtostderr -v=1: (1.108072865s)
--- PASS: TestAddons/parallel/MetricsServer (47.97s)

TestAddons/parallel/HelmTiller (9.66s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:358: tiller-deploy stabilized in 19.153846ms

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:360: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:335: "tiller-deploy-7c86b7fbdf-k4qzh" [1c948044-7876-4077-a225-b7964c8e9b4e] Running

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:360: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.017178621s
addons_test.go:375: (dbg) Run:  kubectl --context addons-20210310004204-1084876 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:375: (dbg) Done: kubectl --context addons-20210310004204-1084876 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: (4.118210972s)
addons_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210310004204-1084876 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (9.66s)

TestAddons/parallel/CSI (191s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:473: csi-hostpath-driver pods stabilized in 6.254732ms
addons_test.go:476: (dbg) Run:  kubectl --context addons-20210310004204-1084876 create -f testdata/csi-hostpath-driver/pvc.yaml

=== CONT  TestAddons/parallel/CSI
addons_test.go:481: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210310004204-1084876 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:486: (dbg) Run:  kubectl --context addons-20210310004204-1084876 create -f testdata/csi-hostpath-driver/pv-pod.yaml

=== CONT  TestAddons/parallel/CSI
addons_test.go:491: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:335: "task-pv-pod" [1a5b6d85-926f-4839-acce-3bc95a554414] Pending

=== CONT  TestAddons/parallel/CSI
helpers_test.go:335: "task-pv-pod" [1a5b6d85-926f-4839-acce-3bc95a554414] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
helpers_test.go:335: "task-pv-pod" [1a5b6d85-926f-4839-acce-3bc95a554414] Running

=== CONT  TestAddons/parallel/CSI
addons_test.go:491: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 21.006372281s
addons_test.go:496: (dbg) Run:  kubectl --context addons-20210310004204-1084876 create -f testdata/csi-hostpath-driver/snapshotclass.yaml
addons_test.go:502: (dbg) Run:  kubectl --context addons-20210310004204-1084876 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:507: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310004204-1084876 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default

=== CONT  TestAddons/parallel/CSI
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310004204-1084876 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310004204-1084876 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310004204-1084876 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310004204-1084876 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310004204-1084876 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310004204-1084876 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310004204-1084876 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310004204-1084876 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310004204-1084876 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310004204-1084876 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310004204-1084876 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310004204-1084876 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310004204-1084876 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310004204-1084876 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default

=== CONT  TestAddons/parallel/CSI
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310004204-1084876 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default

=== CONT  TestAddons/parallel/CSI
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310004204-1084876 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default

=== CONT  TestAddons/parallel/CSI
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310004204-1084876 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310004204-1084876 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310004204-1084876 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310004204-1084876 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310004204-1084876 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310004204-1084876 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310004204-1084876 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310004204-1084876 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310004204-1084876 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310004204-1084876 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310004204-1084876 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310004204-1084876 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310004204-1084876 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310004204-1084876 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310004204-1084876 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310004204-1084876 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310004204-1084876 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310004204-1084876 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310004204-1084876 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310004204-1084876 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310004204-1084876 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310004204-1084876 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310004204-1084876 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310004204-1084876 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310004204-1084876 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310004204-1084876 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310004204-1084876 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310004204-1084876 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310004204-1084876 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:512: (dbg) Run:  kubectl --context addons-20210310004204-1084876 delete pod task-pv-pod
addons_test.go:512: (dbg) Done: kubectl --context addons-20210310004204-1084876 delete pod task-pv-pod: (57.386637053s)
addons_test.go:518: (dbg) Run:  kubectl --context addons-20210310004204-1084876 delete pvc hpvc
addons_test.go:524: (dbg) Run:  kubectl --context addons-20210310004204-1084876 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:529: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210310004204-1084876 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:534: (dbg) Run:  kubectl --context addons-20210310004204-1084876 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:539: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:335: "task-pv-pod-restore" [317b70bd-1afe-44e3-b628-4795978e4e4b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:335: "task-pv-pod-restore" [317b70bd-1afe-44e3-b628-4795978e4e4b] Running
addons_test.go:539: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 12.015423832s
addons_test.go:544: (dbg) Run:  kubectl --context addons-20210310004204-1084876 delete pod task-pv-pod-restore
addons_test.go:544: (dbg) Done: kubectl --context addons-20210310004204-1084876 delete pod task-pv-pod-restore: (47.273427433s)
addons_test.go:548: (dbg) Run:  kubectl --context addons-20210310004204-1084876 delete pvc hpvc-restore
addons_test.go:552: (dbg) Run:  kubectl --context addons-20210310004204-1084876 delete volumesnapshot new-snapshot-demo
addons_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210310004204-1084876 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:556: (dbg) Done: out/minikube-linux-amd64 -p addons-20210310004204-1084876 addons disable csi-hostpath-driver --alsologtostderr -v=1: (5.240880047s)
addons_test.go:560: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210310004204-1084876 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (191.00s)

TestCertOptions (53.94s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:46: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-20210310011929-1084876 --memory=1900 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker

=== CONT  TestCertOptions
cert_options_test.go:46: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-20210310011929-1084876 --memory=1900 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (50.440394208s)
cert_options_test.go:57: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-20210310011929-1084876 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"

=== CONT  TestCertOptions
cert_options_test.go:72: (dbg) Run:  kubectl --context cert-options-20210310011929-1084876 config view
helpers_test.go:171: Cleaning up "cert-options-20210310011929-1084876" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-20210310011929-1084876
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-20210310011929-1084876: (3.026525281s)
--- PASS: TestCertOptions (53.94s)

TestDockerFlags (56.06s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-20210310011924-1084876 --cache-images=false --memory=1800 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker

=== CONT  TestDockerFlags
docker_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-20210310011924-1084876 --cache-images=false --memory=1800 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (42.849745883s)
docker_test.go:49: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-20210310011924-1084876 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-20210310011924-1084876 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:60: (dbg) Done: out/minikube-linux-amd64 -p docker-flags-20210310011924-1084876 ssh "sudo systemctl show docker --property=ExecStart --no-pager": (1.490505038s)
helpers_test.go:171: Cleaning up "docker-flags-20210310011924-1084876" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-20210310011924-1084876

=== CONT  TestDockerFlags
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-20210310011924-1084876: (11.260809834s)
--- PASS: TestDockerFlags (56.06s)

TestForceSystemdFlag (42.68s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-20210310011847-1084876 --memory=1800 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker

=== CONT  TestForceSystemdFlag
docker_test.go:83: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-20210310011847-1084876 --memory=1800 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (38.269563981s)
docker_test.go:99: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-20210310011847-1084876 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:171: Cleaning up "force-systemd-flag-20210310011847-1084876" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-20210310011847-1084876
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-20210310011847-1084876: (3.721684079s)
--- PASS: TestForceSystemdFlag (42.68s)

TestForceSystemdEnv (36.76s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-20210310011810-1084876 --memory=1800 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker

=== CONT  TestForceSystemdEnv
docker_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-20210310011810-1084876 --memory=1800 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (33.040529201s)
docker_test.go:99: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-20210310011810-1084876 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:171: Cleaning up "force-systemd-env-20210310011810-1084876" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-20210310011810-1084876
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-20210310011810-1084876: (3.234683244s)
--- PASS: TestForceSystemdEnv (36.76s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1202: local sync path: /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/files/etc/test/nested/copy/1084876/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (141.36s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:284: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210310004806-1084876 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:284: (dbg) Done: out/minikube-linux-amd64 start -p functional-20210310004806-1084876 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (2m21.362905277s)
--- PASS: TestFunctional/serial/StartWithProxy (141.36s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (3.88s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:327: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210310004806-1084876 --alsologtostderr -v=8
functional_test.go:327: (dbg) Done: out/minikube-linux-amd64 start -p functional-20210310004806-1084876 --alsologtostderr -v=8: (3.878514659s)
functional_test.go:331: soft start took 3.879321027s for "functional-20210310004806-1084876" cluster.
--- PASS: TestFunctional/serial/SoftStart (3.88s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:347: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.28s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:360: (dbg) Run:  kubectl --context functional-20210310004806-1084876 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.28s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.81s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:641: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210310004806-1084876 cache add k8s.gcr.io/pause:3.1
functional_test.go:641: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210310004806-1084876 cache add k8s.gcr.io/pause:3.3
functional_test.go:641: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210310004806-1084876 cache add k8s.gcr.io/pause:latest
functional_test.go:641: (dbg) Done: out/minikube-linux-amd64 -p functional-20210310004806-1084876 cache add k8s.gcr.io/pause:latest: (1.104541035s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.81s)

TestFunctional/serial/CacheCmd/cache/add_local (0.86s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:670: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20210310004806-1084876 /tmp/functional-20210310004806-1084876001641252
functional_test.go:675: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210310004806-1084876 cache add minikube-local-cache-test:functional-20210310004806-1084876
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.86s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:682: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:689: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:702: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210310004806-1084876 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:724: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210310004806-1084876 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:730: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210310004806-1084876 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:730: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210310004806-1084876 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (325.449118ms)

-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:735: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210310004806-1084876 cache reload
functional_test.go:740: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210310004806-1084876 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.00s)

TestFunctional/serial/CacheCmd/cache/delete (0.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:749: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:749: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.15s)

TestFunctional/serial/MinikubeKubectlCmd (0.38s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:378: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210310004806-1084876 kubectl -- --context functional-20210310004806-1084876 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.38s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.38s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:397: (dbg) Run:  out/kubectl --context functional-20210310004806-1084876 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.38s)

TestFunctional/serial/ExtraConfig (93.83s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:410: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210310004806-1084876 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:410: (dbg) Done: out/minikube-linux-amd64 start -p functional-20210310004806-1084876 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m33.829710085s)
functional_test.go:414: restart took 1m33.830026099s for "functional-20210310004806-1084876" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (93.83s)

TestFunctional/serial/ComponentHealth (0.09s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:461: (dbg) Run:  kubectl --context functional-20210310004806-1084876 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:475: etcd phase: Running
functional_test.go:485: etcd status: Ready
functional_test.go:475: kube-apiserver phase: Running
functional_test.go:485: kube-apiserver status: Ready
functional_test.go:475: kube-controller-manager phase: Running
functional_test.go:485: kube-controller-manager status: Ready
functional_test.go:475: kube-scheduler phase: Running
functional_test.go:485: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.09s)

TestFunctional/parallel/ConfigCmd (0.52s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:775: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210310004806-1084876 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:775: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210310004806-1084876 config get cpus
functional_test.go:775: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210310004806-1084876 config get cpus: exit status 14 (76.486762ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:775: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210310004806-1084876 config set cpus 2
functional_test.go:775: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210310004806-1084876 config get cpus
functional_test.go:775: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210310004806-1084876 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:775: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210310004806-1084876 config get cpus
functional_test.go:775: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210310004806-1084876 config get cpus: exit status 14 (79.919057ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.52s)

TestFunctional/parallel/DashboardCmd (6.59s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:551: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url -p functional-20210310004806-1084876 --alsologtostderr -v=1]

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:556: (dbg) stopping [out/minikube-linux-amd64 dashboard --url -p functional-20210310004806-1084876 --alsologtostderr -v=1] ...

=== CONT  TestFunctional/parallel/DashboardCmd
helpers_test.go:499: unable to kill pid 1119985: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (6.59s)

TestFunctional/parallel/DryRun (0.87s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:613: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210310004806-1084876 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:613: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20210310004806-1084876 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (400.953598ms)

-- stdout --
	* [functional-20210310004806-1084876] minikube v1.18.1 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube
	  - MINIKUBE_LOCATION=10730
	* Using the docker driver based on existing profile
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	
	

-- /stdout --
** stderr ** 
	I0310 00:52:37.513241 1119482 out.go:239] Setting OutFile to fd 1 ...
	I0310 00:52:37.513325 1119482 out.go:286] TERM=,COLORTERM=, which probably does not support color
	I0310 00:52:37.513330 1119482 out.go:252] Setting ErrFile to fd 2...
	I0310 00:52:37.513334 1119482 out.go:286] TERM=,COLORTERM=, which probably does not support color
	I0310 00:52:37.513460 1119482 root.go:308] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/bin
	I0310 00:52:37.513731 1119482 out.go:246] Setting JSON to false
	I0310 00:52:37.569497 1119482 start.go:108] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":16518,"bootTime":1615321039,"procs":228,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-15-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0310 00:52:37.569629 1119482 start.go:118] virtualization: kvm guest
	I0310 00:52:37.573772 1119482 out.go:129] * [functional-20210310004806-1084876] minikube v1.18.1 on Debian 9.13 (kvm/amd64)
	I0310 00:52:37.576610 1119482 out.go:129]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/kubeconfig
	I0310 00:52:37.634541 1119482 out.go:129]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0310 00:52:37.638031 1119482 out.go:129]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube
	I0310 00:52:37.641666 1119482 out.go:129]   - MINIKUBE_LOCATION=10730
	I0310 00:52:37.643018 1119482 driver.go:317] Setting default libvirt URI to qemu:///system
	I0310 00:52:37.708776 1119482 docker.go:119] docker version: linux-19.03.15
	I0310 00:52:37.708897 1119482 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0310 00:52:37.819914 1119482 info.go:253] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:98 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2021-03-10 00:52:37.758454166 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-15-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:31628283904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0310 00:52:37.820008 1119482 docker.go:216] overlay module found
	I0310 00:52:37.823737 1119482 out.go:129] * Using the docker driver based on existing profile
	I0310 00:52:37.823766 1119482 start.go:276] selected driver: docker
	I0310 00:52:37.823773 1119482 start.go:718] validating driver "docker" against &{Name:functional-20210310004806-1084876 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:functional-20210310004806-1084876 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.205 Port:8441 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0310 00:52:37.823889 1119482 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0310 00:52:37.823928 1119482 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0310 00:52:37.824051 1119482 out.go:191] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0310 00:52:37.826062 1119482 out.go:129]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0310 00:52:37.828624 1119482 out.go:129] 
	W0310 00:52:37.828785 1119482 out.go:191] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0310 00:52:37.830886 1119482 out.go:129] 

                                                
                                                
** /stderr **
functional_test.go:624: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210310004806-1084876 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.87s)
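The dry-run above deliberately requests too little memory and exits with RSRC_INSUFFICIENT_REQ_MEMORY (250MiB requested vs. a 1800MB usable minimum). A minimal shell sketch of that validation step; minikube's real check is implemented in Go, and the variable names here are illustrative:

```shell
# Illustrative re-creation of the memory check from the stderr above.
requested_mb=250
minimum_mb=1800
if [ "$requested_mb" -lt "$minimum_mb" ]; then
  echo "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation ${requested_mb}MiB is less than the usable minimum of ${minimum_mb}MB"
fi
```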

                                                
                                    
TestFunctional/parallel/StatusCmd (1.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:503: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210310004806-1084876 status
functional_test.go:509: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210310004806-1084876 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:520: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210310004806-1084876 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.53s)

                                                
                                    
TestFunctional/parallel/LogsCmd (6.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/LogsCmd
=== PAUSE TestFunctional/parallel/LogsCmd

=== CONT  TestFunctional/parallel/LogsCmd
functional_test.go:793: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210310004806-1084876 logs

                                                
                                                
=== CONT  TestFunctional/parallel/LogsCmd
functional_test.go:793: (dbg) Done: out/minikube-linux-amd64 -p functional-20210310004806-1084876 logs: (6.355139173s)
--- PASS: TestFunctional/parallel/LogsCmd (6.36s)

                                                
                                    
TestFunctional/parallel/MountCmd (11.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

=== CONT  TestFunctional/parallel/MountCmd
fn_mount_cmd_test.go:72: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20210310004806-1084876 /tmp/mounttest528223795:/mount-9p --alsologtostderr -v=1]
fn_mount_cmd_test.go:106: wrote "test-1615337534543777894" to /tmp/mounttest528223795/created-by-test
fn_mount_cmd_test.go:106: wrote "test-1615337534543777894" to /tmp/mounttest528223795/created-by-test-removed-by-pod
fn_mount_cmd_test.go:106: wrote "test-1615337534543777894" to /tmp/mounttest528223795/test-1615337534543777894
fn_mount_cmd_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210310004806-1084876 ssh "findmnt -T /mount-9p | grep 9p"
fn_mount_cmd_test.go:114: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210310004806-1084876 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (399.865545ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd
fn_mount_cmd_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210310004806-1084876 ssh "findmnt -T /mount-9p | grep 9p"

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd
fn_mount_cmd_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210310004806-1084876 ssh -- ls -la /mount-9p

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd
fn_mount_cmd_test.go:132: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Mar 10 00:52 created-by-test
-rw-r--r-- 1 docker docker 24 Mar 10 00:52 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Mar 10 00:52 test-1615337534543777894
fn_mount_cmd_test.go:136: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210310004806-1084876 ssh cat /mount-9p/test-1615337534543777894
fn_mount_cmd_test.go:147: (dbg) Run:  kubectl --context functional-20210310004806-1084876 replace --force -f testdata/busybox-mount-test.yaml

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd
fn_mount_cmd_test.go:152: (dbg) TestFunctional/parallel/MountCmd: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:335: "busybox-mount" [e88111ad-b8f7-4eaf-b08f-c6e2f633610c] Pending

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd
helpers_test.go:335: "busybox-mount" [e88111ad-b8f7-4eaf-b08f-c6e2f633610c] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd
helpers_test.go:335: "busybox-mount" [e88111ad-b8f7-4eaf-b08f-c6e2f633610c] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
fn_mount_cmd_test.go:152: (dbg) TestFunctional/parallel/MountCmd: integration-test=busybox-mount healthy within 7.006102881s
fn_mount_cmd_test.go:168: (dbg) Run:  kubectl --context functional-20210310004806-1084876 logs busybox-mount
fn_mount_cmd_test.go:180: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210310004806-1084876 ssh stat /mount-9p/created-by-test
fn_mount_cmd_test.go:180: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210310004806-1084876 ssh stat /mount-9p/created-by-pod
fn_mount_cmd_test.go:89: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210310004806-1084876 ssh "sudo umount -f /mount-9p"
fn_mount_cmd_test.go:93: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20210310004806-1084876 /tmp/mounttest528223795:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd (11.21s)
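Before handing a directory to `minikube mount`, the mount test above stages files in a host temp directory (the nanosecond timestamp doubles as both a filename and the file content). A sketch of that host-side setup only; the `minikube mount` call itself is left commented out since it needs a running cluster:

```shell
# Stage a temp directory the way fn_mount_cmd_test.go does before mounting
# it into the guest at /mount-9p.
mountdir=$(mktemp -d)
stamp="test-$(date +%s%N)"          # e.g. test-1615337534543777894
printf '%s' "$stamp" > "$mountdir/created-by-test"
printf '%s' "$stamp" > "$mountdir/created-by-test-removed-by-pod"
printf '%s' "$stamp" > "$mountdir/$stamp"
# minikube mount "$mountdir":/mount-9p   # <- would attach it over 9p
ls "$mountdir"
```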

                                                
                                    
TestFunctional/parallel/ServiceCmd (13.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:974: (dbg) Run:  kubectl --context functional-20210310004806-1084876 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:980: (dbg) Run:  kubectl --context functional-20210310004806-1084876 expose deployment hello-node --type=NodePort --port=8080

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:985: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:335: "hello-node-6cbfcd7cbc-9cqrb" [ef7c32b0-c235-4f3c-96eb-111752d202eb] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:335: "hello-node-6cbfcd7cbc-9cqrb" [ef7c32b0-c235-4f3c-96eb-111752d202eb] Running

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:985: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 11.007502105s
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210310004806-1084876 service list

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210310004806-1084876 service --namespace=default --https --url hello-node
functional_test.go:1011: found endpoint: https://192.168.49.205:30636
functional_test.go:1022: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210310004806-1084876 service hello-node --url --format={{.IP}}
functional_test.go:1031: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210310004806-1084876 service hello-node --url
functional_test.go:1037: found endpoint for hello-node: http://192.168.49.205:30636
functional_test.go:1048: Attempting to fetch http://192.168.49.205:30636 ...
functional_test.go:1067: http://192.168.49.205:30636: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-6cbfcd7cbc-9cqrb

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=172.17.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.205:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.205:30636
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmd (13.85s)
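The endpoint discovered above (`http://192.168.49.205:30636`) is a NodePort URL: the node IP plus the port Kubernetes allocated for the exposed deployment. A sketch of splitting such a `minikube service --url` result into host and port using plain parameter expansion; the URL is copied from the log, and nothing here contacts a cluster:

```shell
# Split a service URL of the form http://<node-ip>:<nodeport>.
url="http://192.168.49.205:30636"
hostport=${url#http://}    # strip the scheme
port=${hostport##*:}       # text after the last ':'
host=${hostport%%:*}       # text before the first ':'
echo "host=$host port=$port"
```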

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1082: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210310004806-1084876 addons list

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1093: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210310004806-1084876 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.24s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (45.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
fn_pvc_test.go:43: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:335: "storage-provisioner" [ad9878e0-63fd-41e3-ab6d-367eff60a16f] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
fn_pvc_test.go:43: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.062063417s
fn_pvc_test.go:48: (dbg) Run:  kubectl --context functional-20210310004806-1084876 get storageclass -o=json
fn_pvc_test.go:68: (dbg) Run:  kubectl --context functional-20210310004806-1084876 apply -f testdata/storage-provisioner/pvc.yaml

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
fn_pvc_test.go:75: (dbg) Run:  kubectl --context functional-20210310004806-1084876 get pvc myclaim -o=json
fn_pvc_test.go:124: (dbg) Run:  kubectl --context functional-20210310004806-1084876 apply -f testdata/storage-provisioner/pod.yaml
fn_pvc_test.go:129: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:335: "sp-pod" [6286c72f-f0d0-4b3f-bea9-1d4b671f1a15] Pending
helpers_test.go:335: "sp-pod" [6286c72f-f0d0-4b3f-bea9-1d4b671f1a15] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:335: "sp-pod" [6286c72f-f0d0-4b3f-bea9-1d4b671f1a15] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
fn_pvc_test.go:129: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 27.008282791s
fn_pvc_test.go:99: (dbg) Run:  kubectl --context functional-20210310004806-1084876 exec sp-pod -- touch /tmp/mount/foo

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
fn_pvc_test.go:105: (dbg) Run:  kubectl --context functional-20210310004806-1084876 delete -f testdata/storage-provisioner/pod.yaml

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
fn_pvc_test.go:105: (dbg) Done: kubectl --context functional-20210310004806-1084876 delete -f testdata/storage-provisioner/pod.yaml: (5.245415533s)
fn_pvc_test.go:124: (dbg) Run:  kubectl --context functional-20210310004806-1084876 apply -f testdata/storage-provisioner/pod.yaml
fn_pvc_test.go:129: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:335: "sp-pod" [8e415861-af9a-4273-ae40-9f835a972f73] Pending
helpers_test.go:335: "sp-pod" [8e415861-af9a-4273-ae40-9f835a972f73] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:335: "sp-pod" [8e415861-af9a-4273-ae40-9f835a972f73] Running
fn_pvc_test.go:129: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.010913828s
fn_pvc_test.go:113: (dbg) Run:  kubectl --context functional-20210310004806-1084876 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (45.68s)
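The core assertion of the PVC test above is that `/tmp/mount/foo`, written before `sp-pod` is deleted, is still present once a replacement pod mounts the same claim. A local stand-in for that flow, with a host directory playing the provisioned volume; no kubectl is involved, so the two `kubectl exec` steps appear as plain file operations:

```shell
# A directory stands in for the PersistentVolume backing the claim.
pv=$(mktemp -d)        # volume provisioned for "myclaim"
touch "$pv/foo"        # kubectl exec sp-pod -- touch /tmp/mount/foo
# ... first sp-pod deleted, second sp-pod created against the same claim;
# the volume outlives the pod ...
ls "$pv"               # kubectl exec sp-pod -- ls /tmp/mount
```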

                                                
                                    
TestFunctional/parallel/SSHCmd (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1115: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210310004806-1084876 ssh "echo hello"
functional_test.go:1132: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210310004806-1084876 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.69s)

                                                
                                    
TestFunctional/parallel/MySQL (32.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1154: (dbg) Run:  kubectl --context functional-20210310004806-1084876 replace --force -f testdata/mysql.yaml
functional_test.go:1159: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:335: "mysql-9bbbc5bbb-7fx84" [7dd9513d-c16d-49dc-829e-3e0380c7ba41] Pending

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:335: "mysql-9bbbc5bbb-7fx84" [7dd9513d-c16d-49dc-829e-3e0380c7ba41] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:335: "mysql-9bbbc5bbb-7fx84" [7dd9513d-c16d-49dc-829e-3e0380c7ba41] Running

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1159: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 25.010471998s
functional_test.go:1166: (dbg) Run:  kubectl --context functional-20210310004806-1084876 exec mysql-9bbbc5bbb-7fx84 -- mysql -ppassword -e "show databases;"
functional_test.go:1166: (dbg) Non-zero exit: kubectl --context functional-20210310004806-1084876 exec mysql-9bbbc5bbb-7fx84 -- mysql -ppassword -e "show databases;": exit status 1 (218.546558ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1166: (dbg) Run:  kubectl --context functional-20210310004806-1084876 exec mysql-9bbbc5bbb-7fx84 -- mysql -ppassword -e "show databases;"
functional_test.go:1166: (dbg) Non-zero exit: kubectl --context functional-20210310004806-1084876 exec mysql-9bbbc5bbb-7fx84 -- mysql -ppassword -e "show databases;": exit status 1 (303.537606ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1166: (dbg) Run:  kubectl --context functional-20210310004806-1084876 exec mysql-9bbbc5bbb-7fx84 -- mysql -ppassword -e "show databases;"

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1166: (dbg) Non-zero exit: kubectl --context functional-20210310004806-1084876 exec mysql-9bbbc5bbb-7fx84 -- mysql -ppassword -e "show databases;": exit status 1 (218.729982ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1166: (dbg) Run:  kubectl --context functional-20210310004806-1084876 exec mysql-9bbbc5bbb-7fx84 -- mysql -ppassword -e "show databases;"

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1166: (dbg) Non-zero exit: kubectl --context functional-20210310004806-1084876 exec mysql-9bbbc5bbb-7fx84 -- mysql -ppassword -e "show databases;": exit status 1 (197.484982ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1166: (dbg) Run:  kubectl --context functional-20210310004806-1084876 exec mysql-9bbbc5bbb-7fx84 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (32.09s)
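The `ERROR 1045` and `ERROR 2002` failures above are expected while mysqld is still initializing; the test simply re-runs the probe until it succeeds. A sketch of that retry loop, with a stub function standing in for the `kubectl exec ... mysql -ppassword -e "show databases;"` probe (the log shows four failed attempts before the final success, which the stub mimics):

```shell
# Retry a readiness probe until it succeeds.
attempt=0
probe() { [ "$attempt" -ge 4 ]; }   # stub: succeeds on the 4th retry
until probe; do
  attempt=$((attempt + 1))
  # a real loop would sleep between attempts, e.g. sleep "$attempt"
done
echo "probe succeeded after $attempt retries"
```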

                                                
                                    
TestFunctional/parallel/FileSync (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1250: Checking for existence of /etc/test/nested/copy/1084876/hosts within VM
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210310004806-1084876 ssh "sudo cat /etc/test/nested/copy/1084876/hosts"
functional_test.go:1256: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.50s)

                                                
                                    
TestFunctional/parallel/CertSync (1.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1291: Checking for existence of /etc/ssl/certs/1084876.pem within VM
functional_test.go:1292: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210310004806-1084876 ssh "sudo cat /etc/ssl/certs/1084876.pem"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1291: Checking for existence of /usr/share/ca-certificates/1084876.pem within VM
functional_test.go:1292: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210310004806-1084876 ssh "sudo cat /usr/share/ca-certificates/1084876.pem"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1291: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1292: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210310004806-1084876 ssh "sudo cat /etc/ssl/certs/51391683.0"
--- PASS: TestFunctional/parallel/CertSync (1.07s)

                                                
                                    
TestFunctional/parallel/DockerEnv (1.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:231: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-20210310004806-1084876 docker-env) && out/minikube-linux-amd64 status -p functional-20210310004806-1084876"

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:251: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-20210310004806-1084876 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv (1.39s)

TestFunctional/parallel/NodeLabels (0.08s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:152: (dbg) Run:  kubectl --context functional-20210310004806-1084876 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

TestFunctional/parallel/LoadImage (2.23s)
=== RUN   TestFunctional/parallel/LoadImage
=== PAUSE TestFunctional/parallel/LoadImage
=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:175: (dbg) Run:  docker pull busybox:latest
functional_test.go:182: (dbg) Run:  docker tag busybox:latest busybox:functional-20210310004806-1084876
functional_test.go:188: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210310004806-1084876 image load busybox:functional-20210310004806-1084876
functional_test.go:188: (dbg) Done: out/minikube-linux-amd64 -p functional-20210310004806-1084876 image load busybox:functional-20210310004806-1084876: (1.076582199s)
functional_test.go:205: (dbg) Run:  out/minikube-linux-amd64 ssh -p functional-20210310004806-1084876 -- docker image inspect busybox:functional-20210310004806-1084876
--- PASS: TestFunctional/parallel/LoadImage (2.23s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:1385: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210310004806-1084876 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:1385: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210310004806-1084876 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:1385: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210310004806-1084876 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.56s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:819: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:823: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.56s)

TestFunctional/parallel/ProfileCmd/profile_list (0.5s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:857: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:862: Took "405.627617ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:871: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:876: Took "89.982866ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.50s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.54s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:907: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:912: Took "438.50612ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:920: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:925: Took "98.621499ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.54s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
fn_tunnel_cmd_test.go:125: (dbg) daemon: [out/minikube-linux-amd64 -p functional-20210310004806-1084876 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
fn_tunnel_cmd_test.go:163: (dbg) Run:  kubectl --context functional-20210310004806-1084876 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
fn_tunnel_cmd_test.go:228: tunnel at http://10.105.209.104 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
fn_tunnel_cmd_test.go:363: (dbg) stopping [out/minikube-linux-amd64 -p functional-20210310004806-1084876 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.44s)
=== RUN   TestErrorJSONOutput
json_output_test.go:144: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-20210310005507-1084876 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:144: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-20210310005507-1084876 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (125.310266ms)
-- stdout --
	{"data":{"currentstep":"0","message":"[json-output-error-20210310005507-1084876] minikube v1.18.1 on Debian 9.13 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"},"datacontenttype":"application/json","id":"f8926580-2e79-47af-a7ae-557235d55f3b","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/kubeconfig"},"datacontenttype":"application/json","id":"3db123ff-99ed-465a-92a2-c2e0adb2b160","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"},"datacontenttype":"application/json","id":"5e34d32d-1adc-4e34-9a46-1474c0f786c8","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube"},"datacontenttype":"application/json","id":"af85d04e-ae36-4ec6-bfe7-5db692c98ad2","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_LOCATION=10730"},"datacontenttype":"application/json","id":"9f3ff57c-023a-424b-a253-35f784776dde","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""},"datacontenttype":"application/json","id":"9b259ffd-49ad-4828-824a-5e9d0b85871f","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.error"}
-- /stdout --
helpers_test.go:171: Cleaning up "json-output-error-20210310005507-1084876" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-20210310005507-1084876
--- PASS: TestErrorJSONOutput (0.44s)

TestKicCustomNetwork/create_custom_network (32.43s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:56: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-20210310005507-1084876 --network=
kic_custom_network_test.go:56: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-20210310005507-1084876 --network=: (29.519142473s)
kic_custom_network_test.go:99: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:171: Cleaning up "docker-network-20210310005507-1084876" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-20210310005507-1084876
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-20210310005507-1084876: (2.86095752s)
--- PASS: TestKicCustomNetwork/create_custom_network (32.43s)

TestKicCustomNetwork/use_default_bridge_network (30.02s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:56: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-20210310005540-1084876 --network=bridge
kic_custom_network_test.go:56: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-20210310005540-1084876 --network=bridge: (27.377975836s)
kic_custom_network_test.go:99: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:171: Cleaning up "docker-network-20210310005540-1084876" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-20210310005540-1084876
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-20210310005540-1084876: (2.593386154s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (30.02s)

TestKicExistingNetwork (30.97s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:99: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-20210310005610-1084876 --network=existing-network
kic_custom_network_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-20210310005610-1084876 --network=existing-network: (28.296849836s)
helpers_test.go:171: Cleaning up "existing-network-20210310005610-1084876" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-20210310005610-1084876
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-20210310005610-1084876: (2.361949469s)
--- PASS: TestKicExistingNetwork (30.97s)

TestMainNoArgs (0.08s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.08s)

TestMultiNode/serial/FreshStart2Nodes (144.49s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:73: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210310005641-1084876 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:73: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20210310005641-1084876 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (2m23.865881245s)
multinode_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210310005641-1084876 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (144.49s)

TestMultiNode/serial/AddNode (22.16s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:97: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20210310005641-1084876 -v 3 --alsologtostderr
multinode_test.go:97: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-20210310005641-1084876 -v 3 --alsologtostderr: (21.298650006s)
multinode_test.go:103: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210310005641-1084876 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (22.16s)

TestMultiNode/serial/ProfileList (0.36s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:118: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.36s)

TestMultiNode/serial/StopNode (2.78s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210310005641-1084876 node stop m03
multinode_test.go:157: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210310005641-1084876 node stop m03: (1.423059076s)
multinode_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210310005641-1084876 status
multinode_test.go:163: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210310005641-1084876 status: exit status 7 (663.361076ms)
-- stdout --
	multinode-20210310005641-1084876
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	timeToStop: Nonexistent
	
	multinode-20210310005641-1084876-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20210310005641-1084876-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:170: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210310005641-1084876 status --alsologtostderr
multinode_test.go:170: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210310005641-1084876 status --alsologtostderr: exit status 7 (697.385363ms)
-- stdout --
	multinode-20210310005641-1084876
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	timeToStop: Nonexistent
	
	multinode-20210310005641-1084876-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20210310005641-1084876-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0310 00:59:30.433931 1156066 out.go:239] Setting OutFile to fd 1 ...
	I0310 00:59:30.434313 1156066 out.go:286] TERM=,COLORTERM=, which probably does not support color
	I0310 00:59:30.434327 1156066 out.go:252] Setting ErrFile to fd 2...
	I0310 00:59:30.434331 1156066 out.go:286] TERM=,COLORTERM=, which probably does not support color
	I0310 00:59:30.434448 1156066 root.go:308] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/bin
	I0310 00:59:30.434663 1156066 out.go:246] Setting JSON to false
	I0310 00:59:30.434686 1156066 mustload.go:66] Loading cluster: multinode-20210310005641-1084876
	I0310 00:59:30.435004 1156066 status.go:241] checking status of multinode-20210310005641-1084876 ...
	I0310 00:59:30.435530 1156066 cli_runner.go:115] Run: docker container inspect multinode-20210310005641-1084876 --format={{.State.Status}}
	I0310 00:59:30.484225 1156066 status.go:317] multinode-20210310005641-1084876 host status = "Running" (err=<nil>)
	I0310 00:59:30.484272 1156066 host.go:66] Checking if "multinode-20210310005641-1084876" exists ...
	I0310 00:59:30.484701 1156066 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210310005641-1084876
	I0310 00:59:30.533204 1156066 host.go:66] Checking if "multinode-20210310005641-1084876" exists ...
	I0310 00:59:30.533553 1156066 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0310 00:59:30.533639 1156066 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210310005641-1084876
	I0310 00:59:30.586000 1156066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/machines/multinode-20210310005641-1084876/id_rsa Username:docker}
	I0310 00:59:30.678219 1156066 ssh_runner.go:149] Run: systemctl --version
	I0310 00:59:30.682771 1156066 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0310 00:59:30.694535 1156066 kubeconfig.go:93] found "multinode-20210310005641-1084876" server: "https://192.168.49.205:8443"
	I0310 00:59:30.694567 1156066 api_server.go:146] Checking apiserver status ...
	I0310 00:59:30.694605 1156066 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 00:59:30.715721 1156066 ssh_runner.go:149] Run: sudo egrep ^[0-9]+:freezer: /proc/1988/cgroup
	I0310 00:59:30.725115 1156066 api_server.go:162] apiserver freezer: "7:freezer:/docker/da31247dd39a3a30c2be84967789e7b970d1a68afac1cbf999fc438d8dc55942/kubepods/burstable/pod1c57d98b494c80369e7b2356dce854c3/3340bb8123cf918ab815c952a5a5838e05f97b5b26c5cdad389e29b5ff2ed2f2"
	I0310 00:59:30.725181 1156066 ssh_runner.go:149] Run: sudo cat /sys/fs/cgroup/freezer/docker/da31247dd39a3a30c2be84967789e7b970d1a68afac1cbf999fc438d8dc55942/kubepods/burstable/pod1c57d98b494c80369e7b2356dce854c3/3340bb8123cf918ab815c952a5a5838e05f97b5b26c5cdad389e29b5ff2ed2f2/freezer.state
	I0310 00:59:30.733050 1156066 api_server.go:184] freezer state: "THAWED"
	I0310 00:59:30.733108 1156066 api_server.go:221] Checking apiserver healthz at https://192.168.49.205:8443/healthz ...
	I0310 00:59:30.739201 1156066 api_server.go:241] https://192.168.49.205:8443/healthz returned 200:
	ok
	I0310 00:59:30.739225 1156066 status.go:402] multinode-20210310005641-1084876 apiserver status = Running (err=<nil>)
	I0310 00:59:30.739236 1156066 status.go:243] multinode-20210310005641-1084876 status: &{Name:multinode-20210310005641-1084876 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop:Nonexistent}
	I0310 00:59:30.739260 1156066 status.go:241] checking status of multinode-20210310005641-1084876-m02 ...
	I0310 00:59:30.739632 1156066 cli_runner.go:115] Run: docker container inspect multinode-20210310005641-1084876-m02 --format={{.State.Status}}
	I0310 00:59:30.786620 1156066 status.go:317] multinode-20210310005641-1084876-m02 host status = "Running" (err=<nil>)
	I0310 00:59:30.786653 1156066 host.go:66] Checking if "multinode-20210310005641-1084876-m02" exists ...
	I0310 00:59:30.786954 1156066 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210310005641-1084876-m02
	I0310 00:59:30.833213 1156066 host.go:66] Checking if "multinode-20210310005641-1084876-m02" exists ...
	I0310 00:59:30.833584 1156066 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0310 00:59:30.833633 1156066 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210310005641-1084876-m02
	I0310 00:59:30.882664 1156066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33517 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/machines/multinode-20210310005641-1084876-m02/id_rsa Username:docker}
	I0310 00:59:30.998132 1156066 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0310 00:59:31.009339 1156066 status.go:243] multinode-20210310005641-1084876-m02 status: &{Name:multinode-20210310005641-1084876-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop:Nonexistent}
	I0310 00:59:31.009387 1156066 status.go:241] checking status of multinode-20210310005641-1084876-m03 ...
	I0310 00:59:31.009757 1156066 cli_runner.go:115] Run: docker container inspect multinode-20210310005641-1084876-m03 --format={{.State.Status}}
	I0310 00:59:31.058739 1156066 status.go:317] multinode-20210310005641-1084876-m03 host status = "Stopped" (err=<nil>)
	I0310 00:59:31.058772 1156066 status.go:330] host is not running, skipping remaining checks
	I0310 00:59:31.058781 1156066 status.go:243] multinode-20210310005641-1084876-m03 status: &{Name:multinode-20210310005641-1084876-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop:Nonexistent}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.78s)

TestMultiNode/serial/StartAfterStop (54.4s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:190: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210310005641-1084876 node start m03 --alsologtostderr
multinode_test.go:200: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210310005641-1084876 node start m03 --alsologtostderr: (53.238621065s)
multinode_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210310005641-1084876 status
multinode_test.go:221: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (54.40s)

TestMultiNode/serial/DeleteNode (6.04s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210310005641-1084876 node delete m03
multinode_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210310005641-1084876 node delete m03: (5.222409916s)
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210310005641-1084876 status --alsologtostderr
multinode_test.go:328: (dbg) Run:  docker volume ls
multinode_test.go:338: (dbg) Run:  kubectl get nodes
multinode_test.go:346: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (6.04s)

TestMultiNode/serial/StopMultiNode (7.68s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:229: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210310005641-1084876 stop
multinode_test.go:229: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210310005641-1084876 stop: (7.34833564s)
multinode_test.go:235: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210310005641-1084876 status
multinode_test.go:235: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210310005641-1084876 status: exit status 7 (165.952755ms)

-- stdout --
	multinode-20210310005641-1084876
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	timeToStop: Nonexistent
	
	multinode-20210310005641-1084876-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210310005641-1084876 status --alsologtostderr
multinode_test.go:242: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210310005641-1084876 status --alsologtostderr: exit status 7 (162.258161ms)

-- stdout --
	multinode-20210310005641-1084876
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	timeToStop: Nonexistent
	
	multinode-20210310005641-1084876-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0310 01:00:39.086132 1161425 out.go:239] Setting OutFile to fd 1 ...
	I0310 01:00:39.086699 1161425 out.go:286] TERM=,COLORTERM=, which probably does not support color
	I0310 01:00:39.086720 1161425 out.go:252] Setting ErrFile to fd 2...
	I0310 01:00:39.086727 1161425 out.go:286] TERM=,COLORTERM=, which probably does not support color
	I0310 01:00:39.087014 1161425 root.go:308] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/bin
	I0310 01:00:39.087364 1161425 out.go:246] Setting JSON to false
	I0310 01:00:39.087399 1161425 mustload.go:66] Loading cluster: multinode-20210310005641-1084876
	I0310 01:00:39.088217 1161425 status.go:241] checking status of multinode-20210310005641-1084876 ...
	I0310 01:00:39.088830 1161425 cli_runner.go:115] Run: docker container inspect multinode-20210310005641-1084876 --format={{.State.Status}}
	I0310 01:00:39.134341 1161425 status.go:317] multinode-20210310005641-1084876 host status = "Stopped" (err=<nil>)
	I0310 01:00:39.134367 1161425 status.go:330] host is not running, skipping remaining checks
	I0310 01:00:39.134375 1161425 status.go:243] multinode-20210310005641-1084876 status: &{Name:multinode-20210310005641-1084876 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop:Nonexistent}
	I0310 01:00:39.134411 1161425 status.go:241] checking status of multinode-20210310005641-1084876-m02 ...
	I0310 01:00:39.134740 1161425 cli_runner.go:115] Run: docker container inspect multinode-20210310005641-1084876-m02 --format={{.State.Status}}
	I0310 01:00:39.180008 1161425 status.go:317] multinode-20210310005641-1084876-m02 host status = "Stopped" (err=<nil>)
	I0310 01:00:39.180043 1161425 status.go:330] host is not running, skipping remaining checks
	I0310 01:00:39.180050 1161425 status.go:243] multinode-20210310005641-1084876-m02 status: &{Name:multinode-20210310005641-1084876-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop:Nonexistent}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (7.68s)

TestMultiNode/serial/RestartMultiNode (464.51s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:258: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:268: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210310005641-1084876 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:268: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20210310005641-1084876 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (7m43.678131227s)
multinode_test.go:274: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210310005641-1084876 status --alsologtostderr
multinode_test.go:288: (dbg) Run:  kubectl get nodes
multinode_test.go:296: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (464.51s)

TestMultiNode/serial/ValidateNameConflict (33.06s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:356: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20210310005641-1084876
multinode_test.go:365: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210310005641-1084876-m02 --driver=docker  --container-runtime=docker
multinode_test.go:365: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-20210310005641-1084876-m02 --driver=docker  --container-runtime=docker: exit status 14 (136.974773ms)

-- stdout --
	* [multinode-20210310005641-1084876-m02] minikube v1.18.1 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube
	  - MINIKUBE_LOCATION=10730
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-20210310005641-1084876-m02' is duplicated with machine name 'multinode-20210310005641-1084876-m02' in profile 'multinode-20210310005641-1084876'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:373: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210310005641-1084876-m03 --driver=docker  --container-runtime=docker
multinode_test.go:373: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20210310005641-1084876-m03 --driver=docker  --container-runtime=docker: (29.601948994s)
multinode_test.go:380: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20210310005641-1084876
multinode_test.go:380: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-20210310005641-1084876: exit status 80 (327.600852ms)

-- stdout --
	* Adding node m03 to cluster multinode-20210310005641-1084876
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20210310005641-1084876-m03 already exists in multinode-20210310005641-1084876-m03 profile
	* 
	* If the above advice does not help, please let us know: 
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
multinode_test.go:385: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-20210310005641-1084876-m03
multinode_test.go:385: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-20210310005641-1084876-m03: (2.917325675s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (33.06s)

TestDebPackageInstall/install_amd64_debian:sid/minikube (0s)
=== RUN   TestDebPackageInstall/install_amd64_debian:sid/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:sid/minikube (0.00s)

TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver (15.13s)
=== RUN   TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver
pkg_install_test.go:121: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/docker_Linux_integration/out:/var/tmp debian:sid sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.18.1-0_amd64.deb"
pkg_install_test.go:121: (dbg) Done: docker run --rm -v/home/jenkins/workspace/docker_Linux_integration/out:/var/tmp debian:sid sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.18.1-0_amd64.deb": (15.131164953s)
--- PASS: TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver (15.13s)

TestDebPackageInstall/install_amd64_debian:latest/minikube (0s)
=== RUN   TestDebPackageInstall/install_amd64_debian:latest/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:latest/minikube (0.00s)

TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver (12.73s)
=== RUN   TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver
pkg_install_test.go:121: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/docker_Linux_integration/out:/var/tmp debian:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.18.1-0_amd64.deb"
pkg_install_test.go:121: (dbg) Done: docker run --rm -v/home/jenkins/workspace/docker_Linux_integration/out:/var/tmp debian:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.18.1-0_amd64.deb": (12.733042648s)
--- PASS: TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver (12.73s)

TestDebPackageInstall/install_amd64_debian:10/minikube (0s)
=== RUN   TestDebPackageInstall/install_amd64_debian:10/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:10/minikube (0.00s)

TestDebPackageInstall/install_amd64_debian:10/kvm2-driver (12.76s)
=== RUN   TestDebPackageInstall/install_amd64_debian:10/kvm2-driver
pkg_install_test.go:121: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/docker_Linux_integration/out:/var/tmp debian:10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.18.1-0_amd64.deb"
pkg_install_test.go:121: (dbg) Done: docker run --rm -v/home/jenkins/workspace/docker_Linux_integration/out:/var/tmp debian:10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.18.1-0_amd64.deb": (12.756341115s)
--- PASS: TestDebPackageInstall/install_amd64_debian:10/kvm2-driver (12.76s)

TestDebPackageInstall/install_amd64_debian:9/minikube (0s)
=== RUN   TestDebPackageInstall/install_amd64_debian:9/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:9/minikube (0.00s)

TestDebPackageInstall/install_amd64_debian:9/kvm2-driver (10.26s)
=== RUN   TestDebPackageInstall/install_amd64_debian:9/kvm2-driver
pkg_install_test.go:121: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/docker_Linux_integration/out:/var/tmp debian:9 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.18.1-0_amd64.deb"
pkg_install_test.go:121: (dbg) Done: docker run --rm -v/home/jenkins/workspace/docker_Linux_integration/out:/var/tmp debian:9 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.18.1-0_amd64.deb": (10.25917304s)
--- PASS: TestDebPackageInstall/install_amd64_debian:9/kvm2-driver (10.26s)

TestDebPackageInstall/install_amd64_ubuntu:latest/minikube (0s)
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:latest/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:latest/minikube (0.00s)

TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver (18.54s)
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver
pkg_install_test.go:121: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/docker_Linux_integration/out:/var/tmp ubuntu:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.18.1-0_amd64.deb"
pkg_install_test.go:121: (dbg) Done: docker run --rm -v/home/jenkins/workspace/docker_Linux_integration/out:/var/tmp ubuntu:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.18.1-0_amd64.deb": (18.538114531s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver (18.54s)

TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube (0s)
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube (0.00s)

TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver (17.36s)
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver
pkg_install_test.go:121: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/docker_Linux_integration/out:/var/tmp ubuntu:20.10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.18.1-0_amd64.deb"
pkg_install_test.go:121: (dbg) Done: docker run --rm -v/home/jenkins/workspace/docker_Linux_integration/out:/var/tmp ubuntu:20.10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.18.1-0_amd64.deb": (17.359245224s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver (17.36s)

TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube (0s)
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube (0.00s)

TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver (18.29s)
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver
pkg_install_test.go:121: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/docker_Linux_integration/out:/var/tmp ubuntu:20.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.18.1-0_amd64.deb"
pkg_install_test.go:121: (dbg) Done: docker run --rm -v/home/jenkins/workspace/docker_Linux_integration/out:/var/tmp ubuntu:20.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.18.1-0_amd64.deb": (18.292897854s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver (18.29s)

TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube (0s)
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube (0.00s)

TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver (16.1s)
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver
pkg_install_test.go:121: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/docker_Linux_integration/out:/var/tmp ubuntu:18.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.18.1-0_amd64.deb"
pkg_install_test.go:121: (dbg) Done: docker run --rm -v/home/jenkins/workspace/docker_Linux_integration/out:/var/tmp ubuntu:18.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.18.1-0_amd64.deb": (16.103599864s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver (16.10s)

TestPreload (127.02s)
=== RUN   TestPreload
preload_test.go:47: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20210310011103-1084876 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.17.0
preload_test.go:47: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20210310011103-1084876 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.17.0: (1m33.374884952s)
preload_test.go:60: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20210310011103-1084876 -- docker pull busybox
preload_test.go:60: (dbg) Done: out/minikube-linux-amd64 ssh -p test-preload-20210310011103-1084876 -- docker pull busybox: (1.668472957s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20210310011103-1084876 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker --kubernetes-version=v1.17.3
preload_test.go:70: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20210310011103-1084876 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker --kubernetes-version=v1.17.3: (28.606417715s)
preload_test.go:79: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20210310011103-1084876 -- docker images
helpers_test.go:171: Cleaning up "test-preload-20210310011103-1084876" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-20210310011103-1084876
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-20210310011103-1084876: (3.003600586s)
--- PASS: TestPreload (127.02s)

TestScheduledStopUnix (63.51s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:124: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-20210310011310-1084876 --memory=1900 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:124: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-20210310011310-1084876 --memory=1900 --driver=docker  --container-runtime=docker: (29.834174146s)
scheduled_stop_test.go:133: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20210310011310-1084876 --schedule 5m
scheduled_stop_test.go:187: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20210310011310-1084876 -n scheduled-stop-20210310011310-1084876
scheduled_stop_test.go:165: signal error was:  <nil>
scheduled_stop_test.go:133: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20210310011310-1084876 --schedule 8s
scheduled_stop_test.go:165: signal error was:  os: process already finished
scheduled_stop_test.go:133: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20210310011310-1084876 --cancel-scheduled
scheduled_stop_test.go:172: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210310011310-1084876 -n scheduled-stop-20210310011310-1084876
scheduled_stop_test.go:133: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20210310011310-1084876 --schedule 5s
scheduled_stop_test.go:165: signal error was:  os: process already finished
scheduled_stop_test.go:172: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210310011310-1084876 -n scheduled-stop-20210310011310-1084876
scheduled_stop_test.go:172: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210310011310-1084876 -n scheduled-stop-20210310011310-1084876
scheduled_stop_test.go:172: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210310011310-1084876 -n scheduled-stop-20210310011310-1084876
scheduled_stop_test.go:172: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210310011310-1084876 -n scheduled-stop-20210310011310-1084876
scheduled_stop_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210310011310-1084876 -n scheduled-stop-20210310011310-1084876: exit status 3 (2.518688241s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0310 01:14:02.505883 1218275 status.go:363] failed to get storage capacity of /var: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:44528->127.0.0.1:33552: read: connection reset by peer
	E0310 01:14:02.506108 1218275 status.go:235] status error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:44528->127.0.0.1:33552: read: connection reset by peer

** /stderr **
scheduled_stop_test.go:172: status error: exit status 3 (may be ok)
scheduled_stop_test.go:172: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210310011310-1084876 -n scheduled-stop-20210310011310-1084876
scheduled_stop_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210310011310-1084876 -n scheduled-stop-20210310011310-1084876: exit status 3 (2.491357106s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0310 01:14:08.453025 1218442 status.go:363] failed to get storage capacity of /var: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:44824->127.0.0.1:33552: read: connection reset by peer
	E0310 01:14:08.453315 1218442 status.go:235] status error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:44824->127.0.0.1:33552: read: connection reset by peer

** /stderr **
scheduled_stop_test.go:172: status error: exit status 3 (may be ok)
scheduled_stop_test.go:172: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210310011310-1084876 -n scheduled-stop-20210310011310-1084876
scheduled_stop_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210310011310-1084876 -n scheduled-stop-20210310011310-1084876: exit status 7 (125.698793ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:172: status error: exit status 7 (may be ok)
scheduled_stop_test.go:172: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20210310011310-1084876 -n scheduled-stop-20210310011310-1084876
scheduled_stop_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20210310011310-1084876 -n scheduled-stop-20210310011310-1084876: exit status 7 (118.720925ms)

-- stdout --
	Nonexistent

-- /stdout --
scheduled_stop_test.go:172: status error: exit status 7 (may be ok)
helpers_test.go:171: Cleaning up "scheduled-stop-20210310011310-1084876" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-20210310011310-1084876
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-20210310011310-1084876: (2.227432812s)
--- PASS: TestScheduledStopUnix (63.51s)

TestSkaffold (78.96s)
=== RUN   TestSkaffold
skaffold_test.go:56: (dbg) Run:  /tmp/skaffold.exe500943000 version
skaffold_test.go:60: skaffold version: v1.20.0
skaffold_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-20210310011413-1084876 --memory=2600 --driver=docker  --container-runtime=docker
skaffold_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-20210310011413-1084876 --memory=2600 --driver=docker  --container-runtime=docker: (29.769752996s)
skaffold_test.go:76: copying out/minikube-linux-amd64 to /home/jenkins/workspace/docker_Linux_integration/out/minikube
skaffold_test.go:100: (dbg) Run:  /tmp/skaffold.exe500943000 run --minikube-profile skaffold-20210310011413-1084876 --kube-context skaffold-20210310011413-1084876 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:100: (dbg) Done: /tmp/skaffold.exe500943000 run --minikube-profile skaffold-20210310011413-1084876 --kube-context skaffold-20210310011413-1084876 --status-check=true --port-forward=false --interactive=false: (35.330633034s)
skaffold_test.go:106: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:335: "leeroy-app-c47d646f5-pvwpd" [9307cd98-008a-4dac-b560-c572d2bf82eb] Running
skaffold_test.go:106: (dbg) TestSkaffold: app=leeroy-app healthy within 5.01392782s
skaffold_test.go:109: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:335: "leeroy-web-7c6945574b-98jbr" [e00ac088-033a-4756-9848-ca042b5004f5] Running
skaffold_test.go:109: (dbg) TestSkaffold: app=leeroy-web healthy within 5.006041054s
helpers_test.go:171: Cleaning up "skaffold-20210310011413-1084876" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-20210310011413-1084876
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-20210310011413-1084876: (3.131975623s)
--- PASS: TestSkaffold (78.96s)

TestInsufficientStorage (10.94s)

=== RUN   TestInsufficientStorage
status_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-20210310011532-1084876 --memory=1900 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:49: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-20210310011532-1084876 --memory=1900 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (7.920851251s)

-- stdout --
	{"data":{"currentstep":"0","message":"[insufficient-storage-20210310011532-1084876] minikube v1.18.1 on Debian 9.13 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"},"datacontenttype":"application/json","id":"5c42a1de-a029-4495-beb2-3b62118222be","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/kubeconfig"},"datacontenttype":"application/json","id":"b7abcdb6-8ae9-4ae5-a910-c655abed93ae","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"},"datacontenttype":"application/json","id":"56979168-bc5b-47ce-a149-fc63a95b11c4","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube"},"datacontenttype":"application/json","id":"b45cc511-c288-45e8-8753-d817c1dc31bc","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_LOCATION=10730"},"datacontenttype":"application/json","id":"4331a063-4804-48b4-804b-efad177d1cc0","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"},"datacontenttype":"application/json","id":"974739c4-fd57-4b33-b1d5-4a5981e3455e","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"},"datacontenttype":"application/json","id":"a1442438-5a21-490f-bc44-6cf768df4498","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"message":"Your cgroup does not allow setting memory."},"datacontenttype":"application/json","id":"6b37cf85-941e-4212-b332-27c1c9827c13","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.warning"}
	{"data":{"message":"More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities"},"datacontenttype":"application/json","id":"3d5d6861-73de-4648-82e4-9e4a9431b592","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20210310011532-1084876 in cluster insufficient-storage-20210310011532-1084876","name":"Starting Node","totalsteps":"19"},"datacontenttype":"application/json","id":"b96d461e-4c37-4ae8-884c-cc336ba0d76a","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=1900MB) ...","name":"Creating Container","totalsteps":"19"},"datacontenttype":"application/json","id":"80f2c33f-2f38-4d0b-932b-11e6cadf3e19","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity)","name":"RSRC_DOCKER_STORAGE","url":""},"datacontenttype":"application/json","id":"44751e08-460b-4e53-9a6f-dab59ec22c69","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.error"}

-- /stdout --
status_test.go:75: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-20210310011532-1084876 --output=json --layout=cluster
status_test.go:75: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20210310011532-1084876 --output=json --layout=cluster: exit status 7 (327.408364ms)

-- stdout --
	{"Name":"insufficient-storage-20210310011532-1084876","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=1900MB) ...","BinaryVersion":"v1.18.1","TimeToStop":"Nonexistent","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20210310011532-1084876","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0310 01:15:40.793109 1226974 status.go:396] kubeconfig endpoint: extract IP: "insufficient-storage-20210310011532-1084876" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/kubeconfig

** /stderr **
status_test.go:75: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-20210310011532-1084876 --output=json --layout=cluster
status_test.go:75: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20210310011532-1084876 --output=json --layout=cluster: exit status 7 (323.484431ms)

-- stdout --
	{"Name":"insufficient-storage-20210310011532-1084876","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.18.1","TimeToStop":"Nonexistent","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20210310011532-1084876","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0310 01:15:41.117087 1227033 status.go:396] kubeconfig endpoint: extract IP: "insufficient-storage-20210310011532-1084876" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/kubeconfig
	E0310 01:15:41.131036 1227033 status.go:540] unable to read event log: stat: stat /home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube/profiles/insufficient-storage-20210310011532-1084876/events.json: no such file or directory

** /stderr **
helpers_test.go:171: Cleaning up "insufficient-storage-20210310011532-1084876" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-20210310011532-1084876
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-20210310011532-1084876: (2.368271641s)
--- PASS: TestInsufficientStorage (10.94s)

TestRunningBinaryUpgrade (100.94s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:114: (dbg) Run:  /tmp/minikube-v1.9.0.944344158.exe start -p running-upgrade-20210310011819-1084876 --memory=2200 --vm-driver=docker  --container-runtime=docker

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:114: (dbg) Done: /tmp/minikube-v1.9.0.944344158.exe start -p running-upgrade-20210310011819-1084876 --memory=2200 --vm-driver=docker  --container-runtime=docker: (1m0.183461947s)
version_upgrade_test.go:124: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-20210310011819-1084876 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:124: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-20210310011819-1084876 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (37.347749125s)
helpers_test.go:171: Cleaning up "running-upgrade-20210310011819-1084876" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-20210310011819-1084876
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-20210310011819-1084876: (2.968814802s)
--- PASS: TestRunningBinaryUpgrade (100.94s)

TestKubernetesUpgrade (220.72s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:218: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20210310011543-1084876 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:218: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210310011543-1084876 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (58.69315756s)
version_upgrade_test.go:223: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-20210310011543-1084876

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:223: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-20210310011543-1084876: (11.311801967s)
version_upgrade_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-20210310011543-1084876 status --format={{.Host}}
version_upgrade_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-20210310011543-1084876 status --format={{.Host}}: exit status 7 (135.237107ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:230: status error: exit status 7 (may be ok)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20210310011543-1084876 --memory=2200 --kubernetes-version=v1.20.5-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:239: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210310011543-1084876 --memory=2200 --kubernetes-version=v1.20.5-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m15.237029382s)
version_upgrade_test.go:244: (dbg) Run:  kubectl --context kubernetes-upgrade-20210310011543-1084876 version --output=json
version_upgrade_test.go:263: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:265: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20210310011543-1084876 --memory=2200 --kubernetes-version=v1.14.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:265: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210310011543-1084876 --memory=2200 --kubernetes-version=v1.14.0 --driver=docker  --container-runtime=docker: exit status 106 (155.395948ms)

-- stdout --
	* [kubernetes-upgrade-20210310011543-1084876] minikube v1.18.1 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-10730-1080996-3063e9e720f8ac1d763b520e496d37888b9d0281/.minikube
	  - MINIKUBE_LOCATION=10730
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.20.5-rc.0 cluster to v1.14.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.14.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-20210310011543-1084876
	    minikube start -p kubernetes-upgrade-20210310011543-1084876 --kubernetes-version=v1.14.0
	    
	    2) Create a second cluster with Kubernetes 1.14.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20210310011543-10848762 --kubernetes-version=v1.14.0
	    
	    3) Use the existing cluster at version Kubernetes 1.20.5-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20210310011543-1084876 --kubernetes-version=v1.20.5-rc.0
	    

** /stderr **
version_upgrade_test.go:269: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:271: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20210310011543-1084876 --memory=2200 --kubernetes-version=v1.20.5-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:271: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210310011543-1084876 --memory=2200 --kubernetes-version=v1.20.5-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m11.298754526s)
helpers_test.go:171: Cleaning up "kubernetes-upgrade-20210310011543-1084876" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-20210310011543-1084876
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-20210310011543-1084876: (3.798351119s)
--- PASS: TestKubernetesUpgrade (220.72s)

TestMissingContainerUpgrade (126.99s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:305: (dbg) Run:  /tmp/minikube-v1.9.1.081048635.exe start -p missing-upgrade-20210310011701-1084876 --memory=2200 --driver=docker  --container-runtime=docker
version_upgrade_test.go:305: (dbg) Done: /tmp/minikube-v1.9.1.081048635.exe start -p missing-upgrade-20210310011701-1084876 --memory=2200 --driver=docker  --container-runtime=docker: (58.715637022s)
version_upgrade_test.go:314: (dbg) Run:  docker stop missing-upgrade-20210310011701-1084876
version_upgrade_test.go:314: (dbg) Done: docker stop missing-upgrade-20210310011701-1084876: (1.934540571s)
version_upgrade_test.go:319: (dbg) Run:  docker rm missing-upgrade-20210310011701-1084876
version_upgrade_test.go:325: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-20210310011701-1084876 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:325: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-20210310011701-1084876 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (54.528765835s)
helpers_test.go:171: Cleaning up "missing-upgrade-20210310011701-1084876" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-20210310011701-1084876
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-20210310011701-1084876: (11.35404071s)
--- PASS: TestMissingContainerUpgrade (126.99s)

TestPause/serial/Start (143.95s)

=== RUN   TestPause/serial/Start

=== CONT  TestPause/serial/Start
pause_test.go:75: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20210310011543-1084876 --memory=1800 --install-addons=false --wait=all --driver=docker  --container-runtime=docker

=== CONT  TestPause/serial/Start
pause_test.go:75: (dbg) Done: out/minikube-linux-amd64 start -p pause-20210310011543-1084876 --memory=1800 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (2m23.953454213s)
--- PASS: TestPause/serial/Start (143.95s)

TestPause/serial/SecondStartNoReconfiguration (4.87s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:87: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20210310011543-1084876 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

=== CONT  TestPause/serial/SecondStartNoReconfiguration
pause_test.go:87: (dbg) Done: out/minikube-linux-amd64 start -p pause-20210310011543-1084876 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4.859477042s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (4.87s)

TestPause/serial/Pause (0.86s)

=== RUN   TestPause/serial/Pause
pause_test.go:104: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20210310011543-1084876 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.86s)

TestPause/serial/VerifyStatus (0.54s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:75: (dbg) Run:  out/minikube-linux-amd64 status -p pause-20210310011543-1084876 --output=json --layout=cluster
status_test.go:75: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-20210310011543-1084876 --output=json --layout=cluster: exit status 2 (536.552434ms)

-- stdout --
	{"Name":"pause-20210310011543-1084876","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 13 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.18.1","TimeToStop":"Nonexistent","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20210310011543-1084876","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.54s)

TestPause/serial/Unpause (0.79s)

=== RUN   TestPause/serial/Unpause
pause_test.go:114: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-20210310011543-1084876 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.79s)

TestPause/serial/PauseAgain (0.95s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:104: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20210310011543-1084876 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.95s)

TestPause/serial/DeletePaused (3.17s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-20210310011543-1084876 --alsologtostderr -v=5
pause_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p pause-20210310011543-1084876 --alsologtostderr -v=5: (3.174244451s)
--- PASS: TestPause/serial/DeletePaused (3.17s)

TestPause/serial/VerifyDeletedResources (0.63s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:134: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:160: (dbg) Run:  docker ps -a
pause_test.go:165: (dbg) Run:  docker volume inspect pause-20210310011543-1084876
pause_test.go:165: (dbg) Non-zero exit: docker volume inspect pause-20210310011543-1084876: exit status 1 (49.513999ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such volume: pause-20210310011543-1084876

** /stderr **
--- PASS: TestPause/serial/VerifyDeletedResources (0.63s)

TestNetworkPlugins/group/auto/Start (150.46s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p auto-20210310012000-1084876 --memory=1800 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=docker

=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p auto-20210310012000-1084876 --memory=1800 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=docker: (2m30.456922408s)
--- PASS: TestNetworkPlugins/group/auto/Start (150.46s)

TestNetworkPlugins/group/false/Start (135.09s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p false-20210310012020-1084876 --memory=1800 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker  --container-runtime=docker

=== CONT  TestNetworkPlugins/group/false/Start
net_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p false-20210310012020-1084876 --memory=1800 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker  --container-runtime=docker: (2m15.094532569s)
--- PASS: TestNetworkPlugins/group/false/Start (135.09s)

TestNetworkPlugins/group/cilium/Start (135.36s)

=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p cilium-20210310012023-1084876 --memory=1800 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=docker

=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p cilium-20210310012023-1084876 --memory=1800 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=docker: (2m15.356027267s)
--- PASS: TestNetworkPlugins/group/cilium/Start (135.36s)

TestStoppedBinaryUpgrade/MinikubeLogs (7.27s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:202: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-20210310011908-1084876
version_upgrade_test.go:202: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-20210310011908-1084876: (7.265275492s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (7.27s)

TestNetworkPlugins/group/calico/Start (139.1s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p calico-20210310012107-1084876 --memory=1800 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=docker

=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p calico-20210310012107-1084876 --memory=1800 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=docker: (2m19.096229336s)
--- PASS: TestNetworkPlugins/group/calico/Start (139.10s)

TestNetworkPlugins/group/auto/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:96: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-20210310012000-1084876 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.36s)

TestNetworkPlugins/group/auto/NetCatPod (9.71s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:110: (dbg) Run:  kubectl --context auto-20210310012000-1084876 replace --force -f testdata/netcat-deployment.yaml
net_test.go:124: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:335: "netcat-66fbc655d5-hrmh9" [97cc082b-7e5a-42b9-99b1-cbcbaae6cd45] Pending
helpers_test.go:335: "netcat-66fbc655d5-hrmh9" [97cc082b-7e5a-42b9-99b1-cbcbaae6cd45] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
=== CONT  TestNetworkPlugins/group/auto/NetCatPod
helpers_test.go:335: "netcat-66fbc655d5-hrmh9" [97cc082b-7e5a-42b9-99b1-cbcbaae6cd45] Running
=== CONT  TestNetworkPlugins/group/auto/NetCatPod
net_test.go:124: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.007906071s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.71s)
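The repeated "waiting 15m0s for pods matching ... in namespace ..." lines above come from a poll-until-healthy helper that re-checks pod phase until the label selector matches a Running pod or the timeout expires. A minimal, cluster-free sketch of that polling pattern (a hypothetical helper, not minikube's actual helpers_test.go code):

```python
import time

def wait_healthy(check, timeout_s=60.0, interval_s=0.5):
    """Poll check() until it returns True or timeout_s elapses.

    Sketch of the wait loop the report's "waiting ... for pods matching"
    lines imply; not the real minikube helper.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval_s)
    return False

# Simulate a pod that reports Pending twice, then Running.
phases = iter(["Pending", "Pending", "Running"])
healthy = wait_healthy(lambda: next(phases, "Running") == "Running",
                       timeout_s=5.0, interval_s=0.01)
print(healthy)  # True
```

The log's "healthy within 9.045320876s" style lines are simply the elapsed time when such a loop first succeeds.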

TestNetworkPlugins/group/false/KubeletFlags (0.39s)
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:96: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-20210310012020-1084876 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.39s)

TestNetworkPlugins/group/false/NetCatPod (9.38s)
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:110: (dbg) Run:  kubectl --context false-20210310012020-1084876 replace --force -f testdata/netcat-deployment.yaml
net_test.go:124: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:335: "netcat-66fbc655d5-zq7w7" [9a5aa91d-70ff-4b1d-bbb4-77a3badd5343] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
=== CONT  TestNetworkPlugins/group/false/NetCatPod
helpers_test.go:335: "netcat-66fbc655d5-zq7w7" [9a5aa91d-70ff-4b1d-bbb4-77a3badd5343] Running
=== CONT  TestNetworkPlugins/group/false/NetCatPod
net_test.go:124: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 9.006598917s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (9.38s)

TestNetworkPlugins/group/cilium/ControllerPod (5.02s)
=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:88: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:335: "cilium-d4r5p" [b802f33c-6622-4a3c-894a-c43b3c845bfd] Running
=== CONT  TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:88: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.017155543s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

TestNetworkPlugins/group/auto/DNS (0.22s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:141: (dbg) Run:  kubectl --context auto-20210310012000-1084876 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.22s)

TestNetworkPlugins/group/auto/Localhost (0.22s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:160: (dbg) Run:  kubectl --context auto-20210310012000-1084876 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.22s)

TestNetworkPlugins/group/auto/HairPin (5.19s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:173: (dbg) Run:  kubectl --context auto-20210310012000-1084876 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
=== CONT  TestNetworkPlugins/group/auto/HairPin
net_test.go:173: (dbg) Non-zero exit: kubectl --context auto-20210310012000-1084876 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.193102984s)
** stderr ** 
	command terminated with exit code 1
** /stderr **
--- PASS: TestNetworkPlugins/group/auto/HairPin (5.19s)
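This HairPin test passes even though `nc` exits 1: the probe asks the netcat pod to dial its own Service name, and for this plugin configuration the test treats the failed dial as the expected (non-hairpin) result. What `nc -w 5 -z host port` measures is just whether a TCP connect succeeds; a cluster-free Python sketch of that check (hypothetical helper, not part of the test suite):

```python
import socket

def port_open(host, port, timeout=5.0):
    """Roughly what `nc -w 5 -z host port` reports: did a TCP connect succeed?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# A local listener stands in for a reachable Service endpoint.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))   # bind to any free port
srv.listen(1)
port = srv.getsockname()[1]

print(port_open("127.0.0.1", port))       # True: like the Localhost probes (exit 0)
srv.close()
print(port_open("127.0.0.1", port, 1.0))  # False: like this failed HairPin dial (exit 1)
```

Exit status 0 from `nc -z` maps to `True` here, and the "command terminated with exit code 1" in the stderr block maps to `False`.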

TestNetworkPlugins/group/cilium/KubeletFlags (0.35s)
=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:96: (dbg) Run:  out/minikube-linux-amd64 ssh -p cilium-20210310012023-1084876 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.35s)

TestNetworkPlugins/group/cilium/NetCatPod (10.38s)
=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:110: (dbg) Run:  kubectl --context cilium-20210310012023-1084876 replace --force -f testdata/netcat-deployment.yaml
net_test.go:124: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:335: "netcat-66fbc655d5-sh7nz" [40454e5b-469e-40b2-9ec2-b555d0992e41] Pending
=== CONT  TestNetworkPlugins/group/cilium/NetCatPod
helpers_test.go:335: "netcat-66fbc655d5-sh7nz" [40454e5b-469e-40b2-9ec2-b555d0992e41] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
=== CONT  TestNetworkPlugins/group/cilium/NetCatPod
helpers_test.go:335: "netcat-66fbc655d5-sh7nz" [40454e5b-469e-40b2-9ec2-b555d0992e41] Running
=== CONT  TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:124: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 10.049266206s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (10.38s)

TestNetworkPlugins/group/false/DNS (0.31s)
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:141: (dbg) Run:  kubectl --context false-20210310012020-1084876 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.31s)

TestNetworkPlugins/group/false/Localhost (0.21s)
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:160: (dbg) Run:  kubectl --context false-20210310012020-1084876 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.21s)

TestNetworkPlugins/group/false/HairPin (5.27s)
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:173: (dbg) Run:  kubectl --context false-20210310012020-1084876 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
=== CONT  TestNetworkPlugins/group/false/HairPin
net_test.go:173: (dbg) Non-zero exit: kubectl --context false-20210310012020-1084876 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.270303508s)
** stderr ** 
	command terminated with exit code 1
** /stderr **
--- PASS: TestNetworkPlugins/group/false/HairPin (5.27s)

TestNetworkPlugins/group/custom-weave/Start (138.94s)
=== RUN   TestNetworkPlugins/group/custom-weave/Start
net_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p custom-weave-20210310012250-1084876 --memory=1800 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker  --container-runtime=docker
=== CONT  TestNetworkPlugins/group/custom-weave/Start
net_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p custom-weave-20210310012250-1084876 --memory=1800 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker  --container-runtime=docker: (2m18.941051348s)
--- PASS: TestNetworkPlugins/group/custom-weave/Start (138.94s)

TestNetworkPlugins/group/cilium/DNS (0.3s)
=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:141: (dbg) Run:  kubectl --context cilium-20210310012023-1084876 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.30s)

TestNetworkPlugins/group/cilium/Localhost (0.27s)
=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:160: (dbg) Run:  kubectl --context cilium-20210310012023-1084876 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.27s)

TestNetworkPlugins/group/enable-default-cni/Start (165.22s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-20210310012255-1084876 --memory=1800 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=docker
=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-20210310012255-1084876 --memory=1800 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=docker: (2m45.219887781s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (165.22s)

TestNetworkPlugins/group/cilium/HairPin (0.26s)
=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:173: (dbg) Run:  kubectl --context cilium-20210310012023-1084876 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.26s)

TestNetworkPlugins/group/kindnet/Start (144.65s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-20210310012259-1084876 --memory=1800 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=docker
=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-20210310012259-1084876 --memory=1800 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=docker: (2m24.646230001s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (144.65s)

TestNetworkPlugins/group/calico/ControllerPod (5.02s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:88: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:335: "calico-node-b4pgf" [6629f79d-fb54-4da8-b67f-5d155c94bc05] Running
net_test.go:88: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.019658532s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

TestNetworkPlugins/group/calico/KubeletFlags (0.38s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:96: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-20210310012107-1084876 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.38s)

TestNetworkPlugins/group/calico/NetCatPod (11.74s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:110: (dbg) Run:  kubectl --context calico-20210310012107-1084876 replace --force -f testdata/netcat-deployment.yaml
net_test.go:124: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:335: "netcat-66fbc655d5-m5h48" [de348c15-7efd-494f-bef0-5ceec0c30793] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:335: "netcat-66fbc655d5-m5h48" [de348c15-7efd-494f-bef0-5ceec0c30793] Running
net_test.go:124: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.018467226s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.74s)

TestNetworkPlugins/group/calico/DNS (0.24s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:141: (dbg) Run:  kubectl --context calico-20210310012107-1084876 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.24s)

TestNetworkPlugins/group/calico/Localhost (0.23s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:160: (dbg) Run:  kubectl --context calico-20210310012107-1084876 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.23s)

TestNetworkPlugins/group/calico/HairPin (0.21s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:173: (dbg) Run:  kubectl --context calico-20210310012107-1084876 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.21s)

TestNetworkPlugins/group/bridge/Start (129.99s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-20210310012348-1084876 --memory=1800 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=docker
=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p bridge-20210310012348-1084876 --memory=1800 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=docker: (2m9.991727875s)
--- PASS: TestNetworkPlugins/group/bridge/Start (129.99s)

TestNetworkPlugins/group/custom-weave/KubeletFlags (0.36s)
=== RUN   TestNetworkPlugins/group/custom-weave/KubeletFlags
net_test.go:96: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-weave-20210310012250-1084876 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-weave/KubeletFlags (0.36s)

TestNetworkPlugins/group/custom-weave/NetCatPod (8.32s)
=== RUN   TestNetworkPlugins/group/custom-weave/NetCatPod
net_test.go:110: (dbg) Run:  kubectl --context custom-weave-20210310012250-1084876 replace --force -f testdata/netcat-deployment.yaml
net_test.go:124: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:335: "netcat-66fbc655d5-h544m" [0eb15c4b-31d9-4f32-8bd0-99dec86b3f0c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:335: "netcat-66fbc655d5-h544m" [0eb15c4b-31d9-4f32-8bd0-99dec86b3f0c] Running
net_test.go:124: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: app=netcat healthy within 8.005931845s
--- PASS: TestNetworkPlugins/group/custom-weave/NetCatPod (8.32s)

TestNetworkPlugins/group/kubenet/Start (119.67s)
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-20210310012521-1084876 --memory=1800 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker  --container-runtime=docker
=== CONT  TestNetworkPlugins/group/kubenet/Start
net_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-20210310012521-1084876 --memory=1800 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m59.669254204s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (119.67s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:88: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:335: "kindnet-rdwxb" [b27c4ecd-b33b-4349-b754-1b18b8197b16] Running
net_test.go:88: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.021628071s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.42s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:96: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-20210310012259-1084876 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.42s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.37s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:110: (dbg) Run:  kubectl --context kindnet-20210310012259-1084876 replace --force -f testdata/netcat-deployment.yaml
net_test.go:124: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:335: "netcat-66fbc655d5-6vlnx" [21d0a283-5f20-440d-94f4-f39d53e9a730] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:335: "netcat-66fbc655d5-6vlnx" [21d0a283-5f20-440d-94f4-f39d53e9a730] Running
net_test.go:124: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.00845877s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.37s)

TestNetworkPlugins/group/kindnet/DNS (0.25s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:141: (dbg) Run:  kubectl --context kindnet-20210310012259-1084876 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.25s)

TestNetworkPlugins/group/kindnet/Localhost (0.23s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:160: (dbg) Run:  kubectl --context kindnet-20210310012259-1084876 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.23s)

TestNetworkPlugins/group/kindnet/HairPin (0.23s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:173: (dbg) Run:  kubectl --context kindnet-20210310012259-1084876 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.23s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.47s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:96: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-20210310012255-1084876 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.47s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.56s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:110: (dbg) Run:  kubectl --context enable-default-cni-20210310012255-1084876 replace --force -f testdata/netcat-deployment.yaml
net_test.go:124: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:335: "netcat-66fbc655d5-kkr6k" [d55b47f5-d885-4204-af33-227e932ea85d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
helpers_test.go:335: "netcat-66fbc655d5-kkr6k" [d55b47f5-d885-4204-af33-227e932ea85d] Running
net_test.go:124: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.006688605s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.56s)

TestStartStop/group/old-k8s-version/serial/FirstStart (153.88s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20210310012543-1084876 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.14.0
=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20210310012543-1084876 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.14.0: (2m33.878688918s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (153.88s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:141: (dbg) Run:  kubectl --context enable-default-cni-20210310012255-1084876 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.23s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:160: (dbg) Run:  kubectl --context enable-default-cni-20210310012255-1084876 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.23s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.21s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:173: (dbg) Run:  kubectl --context enable-default-cni-20210310012255-1084876 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.21s)

TestStartStop/group/no-preload/serial/FirstStart (231.9s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20210310012556-1084876 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.5-rc.0
=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20210310012556-1084876 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.5-rc.0: (3m51.898397429s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (231.90s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.56s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:96: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-20210310012348-1084876 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.56s)

TestNetworkPlugins/group/bridge/NetCatPod (15.25s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:110: (dbg) Run:  kubectl --context bridge-20210310012348-1084876 replace --force -f testdata/netcat-deployment.yaml
net_test.go:124: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:335: "netcat-66fbc655d5-tk8mz" [2617764f-1c25-4342-abbe-da302944721d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:335: "netcat-66fbc655d5-tk8mz" [2617764f-1c25-4342-abbe-da302944721d] Running
net_test.go:124: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.013288534s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (15.25s)

TestNetworkPlugins/group/bridge/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:141: (dbg) Run:  kubectl --context bridge-20210310012348-1084876 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.24s)

TestNetworkPlugins/group/bridge/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:160: (dbg) Run:  kubectl --context bridge-20210310012348-1084876 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.22s)

TestNetworkPlugins/group/bridge/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:173: (dbg) Run:  kubectl --context bridge-20210310012348-1084876 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.23s)

TestStartStop/group/default-k8s-different-port/serial/FirstStart (138.56s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20210310012623-1084876 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.2
=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-different-port-20210310012623-1084876 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.2: (2m18.563668682s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/FirstStart (138.56s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:96: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-20210310012521-1084876 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.35s)

TestNetworkPlugins/group/kubenet/NetCatPod (9.33s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:110: (dbg) Run:  kubectl --context kubenet-20210310012521-1084876 replace --force -f testdata/netcat-deployment.yaml
net_test.go:124: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:335: "netcat-66fbc655d5-fdfs7" [83903635-21be-48d0-bf7e-b146086ea800] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:335: "netcat-66fbc655d5-fdfs7" [83903635-21be-48d0-bf7e-b146086ea800] Running
net_test.go:124: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 9.006517825s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (9.33s)

TestNetworkPlugins/group/kubenet/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:141: (dbg) Run:  kubectl --context kubenet-20210310012521-1084876 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.21s)

TestNetworkPlugins/group/kubenet/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:160: (dbg) Run:  kubectl --context kubenet-20210310012521-1084876 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.19s)

TestNetworkPlugins/group/kubenet/HairPin (0.20s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:173: (dbg) Run:  kubectl --context kubenet-20210310012521-1084876 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.20s)

TestStartStop/group/newest-cni/serial/FirstStart (117.91s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20210310012734-1084876 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.5-rc.0
=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20210310012734-1084876 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.5-rc.0: (1m57.912808229s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (117.91s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.61s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:164: (dbg) Run:  kubectl --context old-k8s-version-20210310012543-1084876 create -f testdata/busybox.yaml
start_stop_delete_test.go:164: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:335: "busybox" [e460a603-813f-11eb-b74b-0242edf3af20] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:335: "busybox" [e460a603-813f-11eb-b74b-0242edf3af20] Running
start_stop_delete_test.go:164: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.015074317s
start_stop_delete_test.go:164: (dbg) Run:  kubectl --context old-k8s-version-20210310012543-1084876 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.61s)

TestStartStop/group/old-k8s-version/serial/Stop (11.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-20210310012543-1084876 --alsologtostderr -v=3
start_stop_delete_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-20210310012543-1084876 --alsologtostderr -v=3: (11.212777601s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.21s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:180: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210310012543-1084876 -n old-k8s-version-20210310012543-1084876
start_stop_delete_test.go:180: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210310012543-1084876 -n old-k8s-version-20210310012543-1084876: exit status 7 (133.818186ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:180: status error: exit status 7 (may be ok)
start_stop_delete_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-20210310012543-1084876
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.29s)

TestStartStop/group/old-k8s-version/serial/SecondStart (71.61s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:196: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20210310012543-1084876 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.14.0
=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:196: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20210310012543-1084876 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.14.0: (1m11.2012928s)
start_stop_delete_test.go:202: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210310012543-1084876 -n old-k8s-version-20210310012543-1084876
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (71.61s)

TestStartStop/group/default-k8s-different-port/serial/DeployApp (8.70s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:164: (dbg) Run:  kubectl --context default-k8s-different-port-20210310012623-1084876 create -f testdata/busybox.yaml
start_stop_delete_test.go:164: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:335: "busybox" [a7a045c6-30eb-42cb-8996-445f5f2a1ed2] Pending
helpers_test.go:335: "busybox" [a7a045c6-30eb-42cb-8996-445f5f2a1ed2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:335: "busybox" [a7a045c6-30eb-42cb-8996-445f5f2a1ed2] Running
start_stop_delete_test.go:164: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: integration-test=busybox healthy within 8.016439242s
start_stop_delete_test.go:164: (dbg) Run:  kubectl --context default-k8s-different-port-20210310012623-1084876 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-different-port/serial/DeployApp (8.70s)

TestStartStop/group/default-k8s-different-port/serial/Stop (11.25s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-different-port-20210310012623-1084876 --alsologtostderr -v=3
start_stop_delete_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-different-port-20210310012623-1084876 --alsologtostderr -v=3: (11.253028994s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (11.25s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.27s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:180: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210310012623-1084876 -n default-k8s-different-port-20210310012623-1084876
start_stop_delete_test.go:180: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210310012623-1084876 -n default-k8s-different-port-20210310012623-1084876: exit status 7 (128.335947ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:180: status error: exit status 7 (may be ok)
start_stop_delete_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-different-port-20210310012623-1084876
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.27s)

TestStartStop/group/default-k8s-different-port/serial/SecondStart (92.08s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:196: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20210310012623-1084876 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.2
=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:196: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-different-port-20210310012623-1084876 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.2: (1m31.634784046s)
start_stop_delete_test.go:202: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210310012623-1084876 -n default-k8s-different-port-20210310012623-1084876
--- PASS: TestStartStop/group/default-k8s-different-port/serial/SecondStart (92.08s)

TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/Stop (1.51s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-20210310012734-1084876 --alsologtostderr -v=3
start_stop_delete_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-20210310012734-1084876 --alsologtostderr -v=3: (1.508938163s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.51s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.29s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:180: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210310012734-1084876 -n newest-cni-20210310012734-1084876
start_stop_delete_test.go:180: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210310012734-1084876 -n newest-cni-20210310012734-1084876: exit status 7 (131.956005ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:180: status error: exit status 7 (may be ok)
start_stop_delete_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-20210310012734-1084876
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.29s)

TestStartStop/group/newest-cni/serial/SecondStart (84.14s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:196: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20210310012734-1084876 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.5-rc.0
=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:196: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20210310012734-1084876 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.5-rc.0: (1m23.73602097s)
start_stop_delete_test.go:202: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210310012734-1084876 -n newest-cni-20210310012734-1084876
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (84.14s)

TestStartStop/group/no-preload/serial/DeployApp (9.47s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:164: (dbg) Run:  kubectl --context no-preload-20210310012556-1084876 create -f testdata/busybox.yaml
start_stop_delete_test.go:164: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:335: "busybox" [4c91539f-1aa4-46f0-b4df-c6deabdfefe6] Pending
helpers_test.go:335: "busybox" [4c91539f-1aa4-46f0-b4df-c6deabdfefe6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
=== CONT  TestStartStop/group/no-preload/serial/DeployApp
helpers_test.go:335: "busybox" [4c91539f-1aa4-46f0-b4df-c6deabdfefe6] Running
=== CONT  TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:164: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.018807022s
start_stop_delete_test.go:164: (dbg) Run:  kubectl --context no-preload-20210310012556-1084876 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.47s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:214: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:335: "kubernetes-dashboard-5d8978d65d-vl8k4" [041e6294-8140-11eb-807e-0242fa99a067] Running
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:214: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013891109s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:225: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:335: "kubernetes-dashboard-5d8978d65d-vl8k4" [041e6294-8140-11eb-807e-0242fa99a067] Running
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:225: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006455068s
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.01s)

TestStartStop/group/no-preload/serial/Stop (11.18s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-20210310012556-1084876 --alsologtostderr -v=3
=== CONT  TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-20210310012556-1084876 --alsologtostderr -v=3: (11.18328097s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.18s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.36s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:232: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-20210310012543-1084876 "sudo crictl images -o json"
start_stop_delete_test.go:232: Found non-minikube image: busybox:1.28.4-glibc
start_stop_delete_test.go:232: Found non-minikube image: minikube-local-cache-test:functional-20210310004806-1084876
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.36s)

TestStartStop/group/old-k8s-version/serial/Pause (3.30s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-20210310012543-1084876 --alsologtostderr -v=1
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20210310012543-1084876 -n old-k8s-version-20210310012543-1084876
start_stop_delete_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20210310012543-1084876 -n old-k8s-version-20210310012543-1084876: exit status 2 (387.273019ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:238: status error: exit status 2 (may be ok)
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20210310012543-1084876 -n old-k8s-version-20210310012543-1084876
start_stop_delete_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20210310012543-1084876 -n old-k8s-version-20210310012543-1084876: exit status 2 (396.996887ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:238: status error: exit status 2 (may be ok)
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-20210310012543-1084876 --alsologtostderr -v=1
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20210310012543-1084876 -n old-k8s-version-20210310012543-1084876
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20210310012543-1084876 -n old-k8s-version-20210310012543-1084876
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.30s)

TestStartStop/group/embed-certs/serial/FirstStart (132.15s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20210310013007-1084876 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.2
=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20210310013007-1084876 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.2: (2m12.15297046s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (132.15s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.28s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:180: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210310012556-1084876 -n no-preload-20210310012556-1084876
start_stop_delete_test.go:180: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210310012556-1084876 -n no-preload-20210310012556-1084876: exit status 7 (132.310959ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:180: status error: exit status 7 (may be ok)
start_stop_delete_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-20210310012556-1084876
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.28s)

TestStartStop/group/no-preload/serial/SecondStart (80.70s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:196: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20210310012556-1084876 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.5-rc.0
=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:196: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20210310012556-1084876 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.5-rc.0: (1m20.290315438s)
start_stop_delete_test.go:202: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210310012556-1084876 -n no-preload-20210310012556-1084876
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (80.70s)

TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (8.02s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:214: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:335: "kubernetes-dashboard-968bcb79-r6s7h" [71933820-e4be-4694-9649-b708837a2d8f] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:335: "kubernetes-dashboard-968bcb79-r6s7h" [71933820-e4be-4694-9649-b708837a2d8f] Running
start_stop_delete_test.go:214: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 8.014494925s
--- PASS: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (8.02s)

TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:225: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:335: "kubernetes-dashboard-968bcb79-r6s7h" [71933820-e4be-4694-9649-b708837a2d8f] Running
start_stop_delete_test.go:225: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007320339s
--- PASS: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.01s)

TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.38s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:232: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-different-port-20210310012623-1084876 "sudo crictl images -o json"
start_stop_delete_test.go:232: Found non-minikube image: busybox:1.28.4-glibc
start_stop_delete_test.go:232: Found non-minikube image: minikube-local-cache-test:functional-20210310004806-1084876
--- PASS: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.38s)
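The image check above can be approximated with a small shell sketch: take the repo tags reported by `crictl images -o json` and flag anything outside the namespaces minikube itself ships. The tag list below is sample data standing in for real `minikube ssh "sudo crictl images -o json"` output, and the allowlist patterns are illustrative, not the harness's actual list:

```shell
# Sample repo tags standing in for parsed `crictl images -o json` output.
images='k8s.gcr.io/pause:3.2
kubernetesui/dashboard:v2.1.0
busybox:1.28.4-glibc
minikube-local-cache-test:functional-20210310004806-1084876'

# Flag anything outside an (illustrative) allowlist of minikube namespaces.
echo "$images" | while read -r img; do
  case "$img" in
    k8s.gcr.io/*|kubernetesui/*|gcr.io/k8s-minikube/*) ;;  # expected images
    *) echo "Found non-minikube image: $img" ;;
  esac
done
# -> Found non-minikube image: busybox:1.28.4-glibc
# -> Found non-minikube image: minikube-local-cache-test:functional-20210310004806-1084876
```

This matches what the log reports: the busybox and local-cache-test images are leftovers from earlier tests in the run, which the harness notes but does not treat as failures.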

TestStartStop/group/default-k8s-different-port/serial/Pause (3.54s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-different-port-20210310012623-1084876 --alsologtostderr -v=1
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20210310012623-1084876 -n default-k8s-different-port-20210310012623-1084876
start_stop_delete_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20210310012623-1084876 -n default-k8s-different-port-20210310012623-1084876: exit status 2 (409.556355ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:238: status error: exit status 2 (may be ok)
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20210310012623-1084876 -n default-k8s-different-port-20210310012623-1084876
start_stop_delete_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20210310012623-1084876 -n default-k8s-different-port-20210310012623-1084876: exit status 2 (429.654622ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:238: status error: exit status 2 (may be ok)
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-different-port-20210310012623-1084876 --alsologtostderr -v=1
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20210310012623-1084876 -n default-k8s-different-port-20210310012623-1084876
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20210310012623-1084876 -n default-k8s-different-port-20210310012623-1084876
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Pause (3.54s)
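In the Pause flow above, `minikube status` exits non-zero while components are paused: the log shows exit status 2 with `{{.APIServer}}` reporting Paused and `{{.Kubelet}}` reporting Stopped, which the harness logs as "(may be ok)". A minimal sketch of that tolerance follows; the code-to-meaning mapping is inferred from this log alone, not from minikube's documented exit-code table:

```shell
# Treat the status exit codes seen in this log as non-fatal:
#   2 -> a component is Paused/Stopped after `minikube pause`
#   7 -> the host itself is Stopped
check_status() {
  case "$1" in
    0) echo "ok" ;;
    2|7) echo "status error: exit status $1 (may be ok)" ;;
    *) echo "unexpected exit status $1" >&2; return 1 ;;
  esac
}

check_status 2   # -> status error: exit status 2 (may be ok)
check_status 7   # -> status error: exit status 7 (may be ok)
```

This is why the Pause tests pass despite the "Non-zero exit" lines: the non-zero codes are expected states, and only an unexpected code would fail the test.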

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:213: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:224: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.36s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:232: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-20210310012734-1084876 "sudo crictl images -o json"
start_stop_delete_test.go:232: Found non-minikube image: minikube-local-cache-test:functional-20210310004806-1084876
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.36s)

TestStartStop/group/newest-cni/serial/Pause (3.15s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-20210310012734-1084876 --alsologtostderr -v=1
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20210310012734-1084876 -n newest-cni-20210310012734-1084876
start_stop_delete_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20210310012734-1084876 -n newest-cni-20210310012734-1084876: exit status 2 (383.388212ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:238: status error: exit status 2 (may be ok)
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20210310012734-1084876 -n newest-cni-20210310012734-1084876
start_stop_delete_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20210310012734-1084876 -n newest-cni-20210310012734-1084876: exit status 2 (379.725131ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:238: status error: exit status 2 (may be ok)
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-20210310012734-1084876 --alsologtostderr -v=1
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20210310012734-1084876 -n newest-cni-20210310012734-1084876
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20210310012734-1084876 -n newest-cni-20210310012734-1084876
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.15s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (29.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:214: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:335: "kubernetes-dashboard-968bcb79-bxgp4" [7744c2b1-5ed3-413f-91c0-ecde0f24c288] Pending
helpers_test.go:335: "kubernetes-dashboard-968bcb79-bxgp4" [7744c2b1-5ed3-413f-91c0-ecde0f24c288] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:335: "kubernetes-dashboard-968bcb79-bxgp4" [7744c2b1-5ed3-413f-91c0-ecde0f24c288] Running
start_stop_delete_test.go:214: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 29.019154842s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (29.02s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:225: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:335: "kubernetes-dashboard-968bcb79-bxgp4" [7744c2b1-5ed3-413f-91c0-ecde0f24c288] Running
start_stop_delete_test.go:225: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007444986s
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.01s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.35s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:232: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-20210310012556-1084876 "sudo crictl images -o json"
start_stop_delete_test.go:232: Found non-minikube image: busybox:1.28.4-glibc
start_stop_delete_test.go:232: Found non-minikube image: minikube-local-cache-test:functional-20210310004806-1084876
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.35s)

TestStartStop/group/no-preload/serial/Pause (3.24s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-20210310012556-1084876 --alsologtostderr -v=1
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20210310012556-1084876 -n no-preload-20210310012556-1084876
start_stop_delete_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20210310012556-1084876 -n no-preload-20210310012556-1084876: exit status 2 (380.017921ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:238: status error: exit status 2 (may be ok)
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20210310012556-1084876 -n no-preload-20210310012556-1084876
start_stop_delete_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20210310012556-1084876 -n no-preload-20210310012556-1084876: exit status 2 (390.977792ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:238: status error: exit status 2 (may be ok)
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-20210310012556-1084876 --alsologtostderr -v=1
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20210310012556-1084876 -n no-preload-20210310012556-1084876
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20210310012556-1084876 -n no-preload-20210310012556-1084876
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.24s)

TestStartStop/group/embed-certs/serial/DeployApp (9.46s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:164: (dbg) Run:  kubectl --context embed-certs-20210310013007-1084876 create -f testdata/busybox.yaml
start_stop_delete_test.go:164: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:335: "busybox" [f7f15724-ec3c-41fa-856d-05458235861b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:335: "busybox" [f7f15724-ec3c-41fa-856d-05458235861b] Running
start_stop_delete_test.go:164: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.015592038s
start_stop_delete_test.go:164: (dbg) Run:  kubectl --context embed-certs-20210310013007-1084876 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.46s)
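The pod the DeployApp test creates can be reconstructed roughly as below. This is a hypothetical stand-in for testdata/busybox.yaml, not the actual file, but the `integration-test: busybox` label is what the 8m0s wait selects on, and the final `ulimit -n` exec only works once such a pod is Running:

```shell
# Hypothetical reconstruction of testdata/busybox.yaml; the label must match
# the integration-test=busybox selector the test waits on.
manifest='apiVersion: v1
kind: Pod
metadata:
  name: busybox
  labels:
    integration-test: busybox
spec:
  containers:
  - name: busybox
    image: busybox:1.28.4-glibc
    command: ["sleep", "3600"]'

printf '%s\n' "$manifest"
# With a running cluster, apply and wait the way the test does:
#   printf '%s\n' "$manifest" | kubectl --context <profile> create -f -
#   kubectl wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m0s
```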

TestStartStop/group/embed-certs/serial/Stop (11.13s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-20210310013007-1084876 --alsologtostderr -v=3
start_stop_delete_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-20210310013007-1084876 --alsologtostderr -v=3: (11.12592167s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.13s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:180: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210310013007-1084876 -n embed-certs-20210310013007-1084876
start_stop_delete_test.go:180: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210310013007-1084876 -n embed-certs-20210310013007-1084876: exit status 7 (120.816414ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:180: status error: exit status 7 (may be ok)
start_stop_delete_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-20210310013007-1084876
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/embed-certs/serial/SecondStart (101.51s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:196: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20210310013007-1084876 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.2
start_stop_delete_test.go:196: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20210310013007-1084876 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.2: (1m41.131197761s)
start_stop_delete_test.go:202: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210310013007-1084876 -n embed-certs-20210310013007-1084876
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (101.51s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (8.02s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:214: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:335: "kubernetes-dashboard-968bcb79-bk92s" [2cd7b08e-9f98-4aa6-92db-05da3cb0196f] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:335: "kubernetes-dashboard-968bcb79-bk92s" [2cd7b08e-9f98-4aa6-92db-05da3cb0196f] Running
start_stop_delete_test.go:214: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 8.013273755s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (8.02s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:225: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:335: "kubernetes-dashboard-968bcb79-bk92s" [2cd7b08e-9f98-4aa6-92db-05da3cb0196f] Running
start_stop_delete_test.go:225: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006290462s
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.01s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.34s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:232: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-20210310013007-1084876 "sudo crictl images -o json"
start_stop_delete_test.go:232: Found non-minikube image: busybox:1.28.4-glibc
start_stop_delete_test.go:232: Found non-minikube image: minikube-local-cache-test:functional-20210310004806-1084876
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.34s)

TestStartStop/group/embed-certs/serial/Pause (3.16s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-20210310013007-1084876 --alsologtostderr -v=1
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20210310013007-1084876 -n embed-certs-20210310013007-1084876
start_stop_delete_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20210310013007-1084876 -n embed-certs-20210310013007-1084876: exit status 2 (372.139378ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:238: status error: exit status 2 (may be ok)
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20210310013007-1084876 -n embed-certs-20210310013007-1084876
start_stop_delete_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20210310013007-1084876 -n embed-certs-20210310013007-1084876: exit status 2 (377.630413ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:238: status error: exit status 2 (may be ok)
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-20210310013007-1084876 --alsologtostderr -v=1
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20210310013007-1084876 -n embed-certs-20210310013007-1084876
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20210310013007-1084876 -n embed-certs-20210310013007-1084876
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.16s)

Test skip (17/241)

TestDownloadOnly/v1.14.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.14.0/cached-images
aaa_download_only_test.go:116: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.14.0/cached-images (0.00s)

TestDownloadOnly/v1.14.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.14.0/kubectl
aaa_download_only_test.go:148: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.14.0/kubectl (0.00s)

TestDownloadOnly/v1.20.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.2/cached-images
aaa_download_only_test.go:116: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.2/cached-images (0.00s)

TestDownloadOnly/v1.20.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.2/kubectl
aaa_download_only_test.go:148: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.2/kubectl (0.00s)

TestDownloadOnly/v1.20.5-rc.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.5-rc.0/cached-images
aaa_download_only_test.go:116: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.5-rc.0/cached-images (0.00s)

TestDownloadOnly/v1.20.5-rc.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.5-rc.0/kubectl
aaa_download_only_test.go:148: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.5-rc.0/kubectl (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:399: Skipping olm test till this timeout issue is solved https://github.com/operator-framework/operator-lifecycle-manager/issues/1534#issuecomment-632342257
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:186: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
fn_tunnel_cmd_test.go:95: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
fn_tunnel_cmd_test.go:95: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
fn_tunnel_cmd_test.go:95: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:33: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/flannel (0s)

=== RUN   TestNetworkPlugins/group/flannel
net_test.go:66: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
--- SKIP: TestNetworkPlugins/group/flannel (0.00s)

TestStartStop/group/disable-driver-mounts (1.6s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:89: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:171: Cleaning up "disable-driver-mounts-20210310012554-1084876" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-20210310012554-1084876
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p disable-driver-mounts-20210310012554-1084876: (1.601950499s)
--- SKIP: TestStartStop/group/disable-driver-mounts (1.60s)