Test Report: Docker_Windows 12230

b85c4fe0fcec6d00161b49ecbfd8182c89122b1a:2021-08-17:20050

Failed tests (12/249)

TestAddons/parallel/GCPAuth (43.67s)

=== RUN   TestAddons/parallel/GCPAuth
=== PAUSE TestAddons/parallel/GCPAuth
=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:618: (dbg) Run:  kubectl --context addons-20210816231050-111344 create -f testdata\busybox.yaml
addons_test.go:624: (dbg) TestAddons/parallel/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [b974b830-815d-4ba7-830f-9f8fa2564e35] Pending
helpers_test.go:343: "busybox" [b974b830-815d-4ba7-830f-9f8fa2564e35] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [b974b830-815d-4ba7-830f-9f8fa2564e35] Running
addons_test.go:624: (dbg) TestAddons/parallel/GCPAuth: integration-test=busybox healthy within 13.1474095s
addons_test.go:630: (dbg) Run:  kubectl --context addons-20210816231050-111344 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:630: (dbg) Non-zero exit: kubectl --context addons-20210816231050-111344 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS": exit status 1 (1.2303967s)
** stderr ** 
	command terminated with exit code 1

** /stderr **
addons_test.go:632: printenv creds: exit status 1
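
The assertion that failed here is easy to replay outside the suite. Below is a minimal sketch in Go (an illustration, assuming kubectl is on PATH and using the context name from the log; this is not the actual addons_test.go source). printenv exits non-zero when GOOGLE_APPLICATION_CREDENTIALS is unset, which is exactly the exit status 1 recorded above and means the gcp-auth webhook never injected credentials into the pod.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors the check at addons_test.go:630: ask the busybox pod for the
	// variable the gcp-auth webhook should have injected.
	cmd := exec.Command("kubectl", "--context", "addons-20210816231050-111344",
		"exec", "busybox", "--",
		"/bin/sh", "-c", "printenv GOOGLE_APPLICATION_CREDENTIALS")
	out, err := cmd.CombinedOutput()
	if err != nil {
		// printenv exits 1 when the variable is unset: the failure above.
		fmt.Printf("printenv creds: %v\n%s", err, out)
		return
	}
	fmt.Printf("GOOGLE_APPLICATION_CREDENTIALS=%s", out)
}
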
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestAddons/parallel/GCPAuth]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect addons-20210816231050-111344
helpers_test.go:236: (dbg) docker inspect addons-20210816231050-111344:
-- stdout --
	[
	    {
	        "Id": "1512aaa3106cbd6cca15d8159b645302d8426a227d2ba9d0478002ec3bf941bb",
	        "Created": "2021-08-16T23:11:59.9014967Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2657,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-16T23:12:00.8338739Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/1512aaa3106cbd6cca15d8159b645302d8426a227d2ba9d0478002ec3bf941bb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1512aaa3106cbd6cca15d8159b645302d8426a227d2ba9d0478002ec3bf941bb/hostname",
	        "HostsPath": "/var/lib/docker/containers/1512aaa3106cbd6cca15d8159b645302d8426a227d2ba9d0478002ec3bf941bb/hosts",
	        "LogPath": "/var/lib/docker/containers/1512aaa3106cbd6cca15d8159b645302d8426a227d2ba9d0478002ec3bf941bb/1512aaa3106cbd6cca15d8159b645302d8426a227d2ba9d0478002ec3bf941bb-json.log",
	        "Name": "/addons-20210816231050-111344",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-20210816231050-111344:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-20210816231050-111344",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4194304000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b1326e7f20108dc8b5610c9c2857c2aa1fd0a902dceebfc089073940c7395c36-init/diff:/var/lib/docker/overlay2/e167e57d4b442602b2435f5ffd2147b1da53de34df49d96ce69565867fcf3850/diff:/var/lib/docker/overlay2/dbfef15a73962254d5bcc2c91a409021fc3573c3135096621d707c6f4feaac7d/diff:/var/lib/docker/overlay2/7fc44848dc580276135d9db2b62ce047cfba1909de5e91acbe8c1a5fc8fb3649/diff:/var/lib/docker/overlay2/493996ff2d6a75ef70db2749dded6936397fe536c32e28dda979b8af93e19f13/diff:/var/lib/docker/overlay2/b862553905dec6f42a41351a012fdce386251d97160f74f6b1feb3b455e1f53a/diff:/var/lib/docker/overlay2/517a8b2830d9e81ff950c8305063a6681219abbb7b22f3a87587fa819a0728ed/diff:/var/lib/docker/overlay2/f2b268080cfd9bbb64731ea6b7cb2ec64077e6c2701c2ab6e8b358a541056c5d/diff:/var/lib/docker/overlay2/ee5e612696333c681900cad605a1f678e9114e9c7ecf70717fad21aea1e52992/diff:/var/lib/docker/overlay2/6f44289af0b09a02645c237aabeff61487c57040b9531c0f7bd97517308bfd57/diff:/var/lib/docker/overlay2/f98f6721a411bacf9d310d4d4405fbd528fa90d60af5ffabda9d55cef9ef3033/diff:/var/lib/docker/overlay2/8bc2e0f6b7c2aeccc6a944f316dbac5672f8685cc5dd5d3c2fc4bd370db4949f/diff:/var/lib/docker/overlay2/ef9e793c1e243004ff088f210369994837eb19a8abd21cf93f75257155445f16/diff:/var/lib/docker/overlay2/48fa7f37fc37f8220a31f4294bc800ef7a33c53c10bdc23d7dc68f27cfe4e535/diff:/var/lib/docker/overlay2/54bc5e0e0c32fdc66ce3eeb345721201a63a0c878d4665607246cd4aa5af61e5/diff:/var/lib/docker/overlay2/398c3fc63254fcc564086ced0eb7211f2d474f8bbdcd43ee27fd609e767c44a6/diff:/var/lib/docker/overlay2/796acb5b93384da004a8065a332cbb07c952569bdd7bb5e551b218e4c5c61f73/diff:/var/lib/docker/overlay2/d90baef87ad95bdfb14a2f35e4cb62336e18c21eb934266f43bfbe017252b857/diff:/var/lib/docker/overlay2/c16752decc8ef06fce4eebdf4ff4725414f3aa80cccd7b3ffdc325095930c0b4/diff:/var/lib/docker/overlay2/a679084eec181b0e1408e573d1ac08c47af1fd8266eb5884bf1a38d5ba0ddbbc/diff:/var/lib/docker/overlay2/15becb79b0d40211562ae33ddc5ec776276b9ae42c8a9f4645dcc6442b36f771/diff:/var/lib/docker/overlay2/068a9a5dce1094eb72788237bd9cda4c76345774d5e647f0af81302a75861f4a/diff:/var/lib/docker/overlay2/74b9e9d807e09734ee96c76bc67adc56c9e3286b39315f89f6747c8c917ad2e5/diff:/var/lib/docker/overlay2/75de8e4895a0b4efe563705c06184db384b5c40154856b9bca2106a8d59fc151/diff:/var/lib/docker/overlay2/cbca3c40b21fee2ef276744168492f17203934aca8de4b459edae2fa55b6bb02/diff:/var/lib/docker/overlay2/584d28a6308bb998bd89d7d92c45b57b9dd66de472d166972d2f5195afd9dd44/diff:/var/lib/docker/overlay2/9c722118749c036eb2d00ba5a6925c5f32b121d64974c99e2de552b26a8bb7cd/diff:/var/lib/docker/overlay2/24908c792743f57c182587c66263f074ed86ae7c5812c631dea82d8ec6650e81/diff:/var/lib/docker/overlay2/9a8de59bfb816b3fc2f0fd522ef966196534483b5e87aafd180dd8b07e9c3582/diff:/var/lib/docker/overlay2/df46d170084213da519dea7e0f402d51272dc10df4d7cd7f37c528c411ac7000/diff:/var/lib/docker/overlay2/36b86a6f515e5882426e598755bb77d43cc340fd20798dfd0a810cd2ab96eeb6/diff:/var/lib/docker/overlay2/b54ac02f70047359cd143a32f862d18498cb556877ccfd252defb9d17fc9d9f5/diff:/var/lib/docker/overlay2/971b77d080920997e1d0d0936f312a9a322ccd6ab9920c83a8eb5d14b93c3849/diff:/var/lib/docker/overlay2/5b5c21ae360c7e0738c0048bc3fe8d7d3cc0640d266660121f3968f675f42063/diff:/var/lib/docker/overlay2/e07bf2561a99ba47435b8f84b267268e02e9e4ff47832bd5054ee28bb1ca5001/diff:/var/lib/docker/overlay2/0c560be48f01814af21ec54fc79ea5e8db28f05e967a17b331be28ad61c75483/diff:/var/lib/docker/overlay2/27930667f3fd0fd38c13a39c0590c03a2c3b3ba04f0a3c946167be6a40f50c46/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b1326e7f20108dc8b5610c9c2857c2aa1fd0a902dceebfc089073940c7395c36/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b1326e7f20108dc8b5610c9c2857c2aa1fd0a902dceebfc089073940c7395c36/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b1326e7f20108dc8b5610c9c2857c2aa1fd0a902dceebfc089073940c7395c36/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-20210816231050-111344",
	                "Source": "/var/lib/docker/volumes/addons-20210816231050-111344/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-20210816231050-111344",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-20210816231050-111344",
	                "name.minikube.sigs.k8s.io": "addons-20210816231050-111344",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fcbbb1f2d49c3db09c9cec4fb59bf578564913adcf9f5add23f90b8a228db4df",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55004"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55003"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55000"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55002"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55001"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/fcbbb1f2d49c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-20210816231050-111344": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "1512aaa3106c",
	                        "addons-20210816231050-111344"
	                    ],
	                    "NetworkID": "7774a99ddca11d38a9d22ce063214203271e31dd3043b5a72f7401b25520e57e",
	                    "EndpointID": "ced691352953d371dcadbcd07c1275943e5ab0292ef3ff8a9f7d65cc6dee966e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-20210816231050-111344 -n addons-20210816231050-111344
helpers_test.go:240: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-20210816231050-111344 -n addons-20210816231050-111344: (4.980079s)
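
Both post-mortem commands lean on Go templates: docker inspect dumped the whole JSON document above, while minikube status --format={{.Host}} extracts a single field. The same trick works on the docker side. A small sketch (illustrative, not part of the suite) that pulls just the container state instead of the full blob:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// --format applies a Go template to the inspect document, so only the
	// requested fields come back (here: "running 2657" for the node above).
	out, err := exec.Command("docker", "inspect",
		"--format", "{{.State.Status}} {{.State.Pid}}",
		"addons-20210816231050-111344").Output()
	if err != nil {
		fmt.Println("docker inspect failed:", err)
		return
	}
	fmt.Print(string(out))
}
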
helpers_test.go:245: <<< TestAddons/parallel/GCPAuth FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestAddons/parallel/GCPAuth]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20210816231050-111344 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20210816231050-111344 logs -n 25: (16.3186723s)
helpers_test.go:253: TestAddons/parallel/GCPAuth logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------|---------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	| Command |                 Args                  |                Profile                |          User           | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------|---------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	| delete  | --all                                 | download-only-20210816230902-111344   | WINDOWS-SERVER-\jenkins | v1.22.0 | Mon, 16 Aug 2021 23:09:56 GMT | Mon, 16 Aug 2021 23:10:01 GMT |
	| delete  | -p                                    | download-only-20210816230902-111344   | WINDOWS-SERVER-\jenkins | v1.22.0 | Mon, 16 Aug 2021 23:10:01 GMT | Mon, 16 Aug 2021 23:10:04 GMT |
	|         | download-only-20210816230902-111344   |                                       |                         |         |                               |                               |
	| delete  | -p                                    | download-only-20210816230902-111344   | WINDOWS-SERVER-\jenkins | v1.22.0 | Mon, 16 Aug 2021 23:10:04 GMT | Mon, 16 Aug 2021 23:10:08 GMT |
	|         | download-only-20210816230902-111344   |                                       |                         |         |                               |                               |
	| delete  | -p                                    | download-docker-20210816231008-111344 | WINDOWS-SERVER-\jenkins | v1.22.0 | Mon, 16 Aug 2021 23:10:46 GMT | Mon, 16 Aug 2021 23:10:50 GMT |
	|         | download-docker-20210816231008-111344 |                                       |                         |         |                               |                               |
	| start   | -p                                    | addons-20210816231050-111344          | WINDOWS-SERVER-\jenkins | v1.22.0 | Mon, 16 Aug 2021 23:10:51 GMT | Mon, 16 Aug 2021 23:16:58 GMT |
	|         | addons-20210816231050-111344          |                                       |                         |         |                               |                               |
	|         | --wait=true --memory=4000             |                                       |                         |         |                               |                               |
	|         | --alsologtostderr                     |                                       |                         |         |                               |                               |
	|         | --addons=registry                     |                                       |                         |         |                               |                               |
	|         | --addons=metrics-server               |                                       |                         |         |                               |                               |
	|         | --addons=olm                          |                                       |                         |         |                               |                               |
	|         | --addons=volumesnapshots              |                                       |                         |         |                               |                               |
	|         | --addons=csi-hostpath-driver          |                                       |                         |         |                               |                               |
	|         | --driver=docker                       |                                       |                         |         |                               |                               |
	|         | --addons=ingress                      |                                       |                         |         |                               |                               |
	|         | --addons=helm-tiller                  |                                       |                         |         |                               |                               |
	| -p      | addons-20210816231050-111344          | addons-20210816231050-111344          | WINDOWS-SERVER-\jenkins | v1.22.0 | Mon, 16 Aug 2021 23:17:06 GMT | Mon, 16 Aug 2021 23:17:10 GMT |
	|         | addons disable metrics-server         |                                       |                         |         |                               |                               |
	|         | --alsologtostderr -v=1                |                                       |                         |         |                               |                               |
	|---------|---------------------------------------|---------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/16 23:10:51
	Running on machine: windows-server-2
	Binary: Built with gc go1.16.7 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 23:10:51.035915   91500 out.go:298] Setting OutFile to fd 936 ...
	I0816 23:10:51.037632   91500 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 23:10:51.037632   91500 out.go:311] Setting ErrFile to fd 940...
	I0816 23:10:51.037632   91500 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 23:10:51.053238   91500 out.go:305] Setting JSON to false
	I0816 23:10:51.056474   91500 start.go:111] hostinfo: {"hostname":"windows-server-2","uptime":8363498,"bootTime":1620791953,"procs":142,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2f8328f4-5428-47c7-ab5a-b32e2504bd6f"}
	W0816 23:10:51.056474   91500 start.go:119] gopshost.Virtualization returned error: not implemented yet
	I0816 23:10:51.060226   91500 out.go:177] * [addons-20210816231050-111344] minikube v1.22.0 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	I0816 23:10:51.060670   91500 notify.go:169] Checking for updates...
	I0816 23:10:51.062408   91500 out.go:177]   - KUBECONFIG=C:\Users\jenkins\minikube-integration\kubeconfig
	I0816 23:10:51.064077   91500 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins\minikube-integration\.minikube
	I0816 23:10:51.065762   91500 out.go:177]   - MINIKUBE_LOCATION=12230
	I0816 23:10:51.066335   91500 driver.go:335] Setting default libvirt URI to qemu:///system
	I0816 23:10:52.731099   91500 docker.go:132] docker version: linux-20.10.2
	I0816 23:10:52.740458   91500 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0816 23:10:53.392534   91500 info.go:263] docker info: {ID:4XCY:3GZD:KK67:IPM7:RRQF:WWZF:OGQ6:X6HQ:572M:7N57:P63G:EAE5 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:45 SystemTime:2021-08-16 23:10:53.0981937 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0816 23:10:53.396083   91500 out.go:177] * Using the docker driver based on user configuration
	I0816 23:10:53.396278   91500 start.go:278] selected driver: docker
	I0816 23:10:53.396278   91500 start.go:751] validating driver "docker" against <nil>
	I0816 23:10:53.396278   91500 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0816 23:10:53.461045   91500 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0816 23:10:54.108281   91500 info.go:263] docker info: {ID:4XCY:3GZD:KK67:IPM7:RRQF:WWZF:OGQ6:X6HQ:572M:7N57:P63G:EAE5 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:45 SystemTime:2021-08-16 23:10:53.8175189 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0816 23:10:54.108490   91500 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0816 23:10:54.109129   91500 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 23:10:54.109129   91500 cni.go:93] Creating CNI manager for ""
	I0816 23:10:54.109129   91500 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0816 23:10:54.109342   91500 start_flags.go:277] config:
	{Name:addons-20210816231050-111344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:addons-20210816231050-111344 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 23:10:54.111692   91500 out.go:177] * Starting control plane node addons-20210816231050-111344 in cluster addons-20210816231050-111344
	I0816 23:10:54.111929   91500 cache.go:117] Beginning downloading kic base image for docker with docker
	I0816 23:10:54.113628   91500 out.go:177] * Pulling base image ...
	I0816 23:10:54.113905   91500 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
	I0816 23:10:54.113905   91500 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0816 23:10:54.114183   91500 preload.go:147] Found local preload: C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4
	I0816 23:10:54.114183   91500 cache.go:56] Caching tarball of preloaded images
	I0816 23:10:54.114832   91500 preload.go:173] Found C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 23:10:54.115094   91500 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on docker
	I0816 23:10:54.115757   91500 profile.go:148] Saving config to C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344\config.json ...
	I0816 23:10:54.115995   91500 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344\config.json: {Name:mkd26b5f46d370bde24d24a7e40d32241a1ffc3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 23:10:54.559734   91500 cache.go:145] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 to local cache
	I0816 23:10:54.560049   91500 localpath.go:146] windows sanitize: C:\Users\jenkins\minikube-integration\.minikube\cache\kic\kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6.tar -> C:\Users\jenkins\minikube-integration\.minikube\cache\kic\kicbase-builds_v0.0.25-1628619379-12032@sha256_937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6.tar
	I0816 23:10:54.560532   91500 localpath.go:146] windows sanitize: C:\Users\jenkins\minikube-integration\.minikube\cache\kic\kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6.tar -> C:\Users\jenkins\minikube-integration\.minikube\cache\kic\kicbase-builds_v0.0.25-1628619379-12032@sha256_937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6.tar
	I0816 23:10:54.560798   91500 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local cache directory
	I0816 23:10:54.561070   91500 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local cache directory, skipping pull
	I0816 23:10:54.561070   91500 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in cache, skipping pull
	I0816 23:10:54.561387   91500 cache.go:148] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 as a tarball
	I0816 23:10:54.561387   91500 cache.go:159] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 from local cache
	I0816 23:10:54.561640   91500 localpath.go:146] windows sanitize: C:\Users\jenkins\minikube-integration\.minikube\cache\kic\kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6.tar -> C:\Users\jenkins\minikube-integration\.minikube\cache\kic\kicbase-builds_v0.0.25-1628619379-12032@sha256_937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6.tar
	I0816 23:11:48.349223   91500 cache.go:162] successfully loaded gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 from cached tarball
	I0816 23:11:48.349464   91500 cache.go:205] Successfully downloaded all kic artifacts
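	The cache lines above narrate a three-step lookup: check the local docker daemon for the kic base image, then the on-disk tarball cache (hence "skipping pull"), and only then download; the cached tarball is finally loaded into the daemon. A condensed sketch of that order follows (an assumed simplification of the flow the log describes, not minikube's real code; the tarball path constant is abridged for readability):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

const (
	image = "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032"
	// Abridged stand-in for the sanitized tarball path shown in the log.
	tarball = `C:\Users\jenkins\minikube-integration\.minikube\cache\kic\kicbase-builds_v0.0.25-1628619379-12032.tar`
)

func main() {
	// 1. Already present in the local docker daemon?
	if exec.Command("docker", "image", "inspect", image).Run() == nil {
		fmt.Println("exists in daemon, skipping pull")
		return
	}
	// 2. Cached tarball on disk? Load it instead of pulling.
	if _, err := os.Stat(tarball); err == nil {
		fmt.Println("found in local cache directory, skipping pull")
		fmt.Println("docker load:", exec.Command("docker", "load", "-i", tarball).Run())
		return
	}
	// 3. Fall back to a network pull.
	fmt.Println("pull:", exec.Command("docker", "pull", image).Run())
}
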
	I0816 23:11:48.349916   91500 start.go:313] acquiring machines lock for addons-20210816231050-111344: {Name:mk66463a9b79cee6566266c22618e1e35d432357 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 23:11:48.350496   91500 start.go:317] acquired machines lock for "addons-20210816231050-111344" in 580.4µs
	I0816 23:11:48.350929   91500 start.go:89] Provisioning new machine with config: &{Name:addons-20210816231050-111344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:addons-20210816231050-111344 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0816 23:11:48.351137   91500 start.go:126] createHost starting for "" (driver="docker")
	I0816 23:11:48.358985   91500 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0816 23:11:48.360431   91500 start.go:160] libmachine.API.Create for "addons-20210816231050-111344" (driver="docker")
	I0816 23:11:48.360593   91500 client.go:168] LocalClient.Create starting
	I0816 23:11:48.364040   91500 main.go:130] libmachine: Creating CA: C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem
	I0816 23:11:48.726850   91500 main.go:130] libmachine: Creating client certificate: C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem
	I0816 23:11:49.122218   91500 cli_runner.go:115] Run: docker network inspect addons-20210816231050-111344 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0816 23:11:49.563410   91500 cli_runner.go:162] docker network inspect addons-20210816231050-111344 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0816 23:11:49.570405   91500 network_create.go:255] running [docker network inspect addons-20210816231050-111344] to gather additional debugging logs...
	I0816 23:11:49.570405   91500 cli_runner.go:115] Run: docker network inspect addons-20210816231050-111344
	W0816 23:11:49.992959   91500 cli_runner.go:162] docker network inspect addons-20210816231050-111344 returned with exit code 1
	I0816 23:11:49.993223   91500 network_create.go:258] error running [docker network inspect addons-20210816231050-111344]: docker network inspect addons-20210816231050-111344: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: addons-20210816231050-111344
	I0816 23:11:49.993223   91500 network_create.go:260] output of [docker network inspect addons-20210816231050-111344]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: addons-20210816231050-111344
	
	** /stderr **
	I0816 23:11:49.999670   91500 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0816 23:11:50.429273   91500 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0001581e8] misses:0}
	I0816 23:11:50.429273   91500 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0816 23:11:50.429273   91500 network_create.go:106] attempt to create docker network addons-20210816231050-111344 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0816 23:11:50.435333   91500 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20210816231050-111344
	I0816 23:11:50.966636   91500 network_create.go:90] docker network addons-20210816231050-111344 192.168.49.0/24 created
	I0816 23:11:50.966636   91500 kic.go:106] calculated static IP "192.168.49.2" for the "addons-20210816231050-111344" container
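	The "calculated static IP" line follows directly from the subnet chosen a few lines earlier: the gateway takes .1 and the first client address (.2) goes to the container, matching ClientMin in the subnet dump above. A sketch of that arithmetic (assumed logic mirroring the log messages, not a quote of minikube's source):

package main

import (
	"fmt"
	"net"
)

func main() {
	_, subnet, err := net.ParseCIDR("192.168.49.0/24")
	if err != nil {
		panic(err)
	}
	base := subnet.IP.To4() // network address 192.168.49.0
	gateway := net.IPv4(base[0], base[1], base[2], base[3]+1)
	staticIP := net.IPv4(base[0], base[1], base[2], base[3]+2)
	fmt.Println("gateway:", gateway, "container:", staticIP) // 192.168.49.1 / 192.168.49.2
}
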
	I0816 23:11:50.978895   91500 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0816 23:11:51.385397   91500 cli_runner.go:115] Run: docker volume create addons-20210816231050-111344 --label name.minikube.sigs.k8s.io=addons-20210816231050-111344 --label created_by.minikube.sigs.k8s.io=true
	I0816 23:11:51.780893   91500 oci.go:102] Successfully created a docker volume addons-20210816231050-111344
	I0816 23:11:51.788629   91500 cli_runner.go:115] Run: docker run --rm --name addons-20210816231050-111344-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210816231050-111344 --entrypoint /usr/bin/test -v addons-20210816231050-111344:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib
	I0816 23:11:58.130337   91500 cli_runner.go:168] Completed: docker run --rm --name addons-20210816231050-111344-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210816231050-111344 --entrypoint /usr/bin/test -v addons-20210816231050-111344:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib: (6.3414666s)
	I0816 23:11:58.130650   91500 oci.go:106] Successfully prepared a docker volume addons-20210816231050-111344
	I0816 23:11:58.130650   91500 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
	I0816 23:11:58.131040   91500 kic.go:179] Starting extracting preloaded images to volume ...
	I0816 23:11:58.138644   91500 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-20210816231050-111344:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir
	I0816 23:11:58.138875   91500 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0816 23:11:58.854060   91500 info.go:263] docker info: {ID:4XCY:3GZD:KK67:IPM7:RRQF:WWZF:OGQ6:X6HQ:572M:7N57:P63G:EAE5 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:46 SystemTime:2021-08-16 23:11:58.5508966 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0816 23:11:58.863544   91500 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	W0816 23:11:58.951978   91500 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-20210816231050-111344:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I0816 23:11:58.952249   91500 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-20210816231050-111344:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: [binary-serialized System.Exception; the readable parts follow]
	System.Exception: The notification platform is unavailable.
	   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)
	   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__6.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.WPF\PromptShareDirectory.cs:line 53
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__8.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.ApiServices\Mounting\FileSharing.cs:line 95
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__6.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.ApiServices\Mounting\FileSharing.cs:line 55
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.HttpApi\Controllers\FilesharingController.cs:line 21
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()
	CreateToastNotifier
	Windows.UI, Version=255.255.255.255, Culture=neutral, PublicKeyToken=null, ContentType=WindowsRuntime
	Windows.UI.Notifications.ToastNotificationManager
	Windows.UI.Notifications.ToastNotifier CreateToastNotifier(System.String)
	RestrictedDescription: The notification platform is unavailable.
	[remainder of the binary-serialized exception payload omitted]
	See 'docker run --help'.
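	Decoded, that stderr comes from Docker Desktop rather than from tar: bind-mounting the host path triggers Docker Desktop's file-sharing flow (FilesharingController.ShareDirectory -> FileSharing.ShareAsync -> PromptShareDirectory, per the stack trace above), which tries to raise a Windows toast notification, and on this headless server the notification platform is unavailable. The daemon answers HTTP 500 and the CLI exits 125 before tar ever runs. A sketch of the failing invocation and how the caller survives it (illustrative only; the image digest is dropped for brevity):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same shape as the command logged at 23:11:58: mount the preload
	// tarball read-only and untar it into the cluster's /var volume.
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", `C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro`,
		"-v", "addons-20210816231050-111344:/extractDir",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032",
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if err := cmd.Run(); err != nil {
		// Exit status 125 means the docker CLI/daemon failed before the
		// entrypoint ran. minikube logs this and continues without the
		// preload, as the next log lines show.
		fmt.Println("unable to extract preloaded tarball:", err)
	}
}
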
	I0816 23:11:59.522907   91500 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-20210816231050-111344 --name addons-20210816231050-111344 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210816231050-111344 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-20210816231050-111344 --network addons-20210816231050-111344 --ip 192.168.49.2 --volume addons-20210816231050-111344:/var --security-opt apparmor=unconfined --memory=4000mb --memory-swap=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6
	I0816 23:12:00.890557   91500 cli_runner.go:168] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-20210816231050-111344 --name addons-20210816231050-111344 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210816231050-111344 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-20210816231050-111344 --network addons-20210816231050-111344 --ip 192.168.49.2 --volume addons-20210816231050-111344:/var --security-opt apparmor=unconfined --memory=4000mb --memory-swap=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6: (1.3673988s)
	I0816 23:12:00.900252   91500 cli_runner.go:115] Run: docker container inspect addons-20210816231050-111344 --format={{.State.Running}}
	I0816 23:12:01.374402   91500 cli_runner.go:115] Run: docker container inspect addons-20210816231050-111344 --format={{.State.Status}}
	I0816 23:12:01.806238   91500 cli_runner.go:115] Run: docker exec addons-20210816231050-111344 stat /var/lib/dpkg/alternatives/iptables
	I0816 23:12:02.394072   91500 oci.go:278] the created container "addons-20210816231050-111344" has a running status.
	I0816 23:12:02.394072   91500 kic.go:210] Creating ssh key for kic: C:\Users\jenkins\minikube-integration\.minikube\machines\addons-20210816231050-111344\id_rsa...
	I0816 23:12:02.602330   91500 kic_runner.go:188] docker (temp): C:\Users\jenkins\minikube-integration\.minikube\machines\addons-20210816231050-111344\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0816 23:12:03.256618   91500 cli_runner.go:115] Run: docker container inspect addons-20210816231050-111344 --format={{.State.Status}}
	I0816 23:12:03.675731   91500 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0816 23:12:03.675731   91500 kic_runner.go:115] Args: [docker exec --privileged addons-20210816231050-111344 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0816 23:12:04.226494   91500 kic.go:250] ensuring only current user has permissions to key file located at : C:\Users\jenkins\minikube-integration\.minikube\machines\addons-20210816231050-111344\id_rsa...
	I0816 23:12:04.972160   91500 cli_runner.go:115] Run: docker container inspect addons-20210816231050-111344 --format={{.State.Status}}
	I0816 23:12:05.370004   91500 machine.go:88] provisioning docker machine ...
	I0816 23:12:05.370453   91500 ubuntu.go:169] provisioning hostname "addons-20210816231050-111344"
	I0816 23:12:05.378596   91500 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210816231050-111344
	I0816 23:12:05.786592   91500 main.go:130] libmachine: Using SSH client type: native
	I0816 23:12:05.796759   91500 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x12295a0] 0x1229560 <nil>  [] 0s} 127.0.0.1 55004 <nil> <nil>}
	I0816 23:12:05.796759   91500 main.go:130] libmachine: About to run SSH command:
	sudo hostname addons-20210816231050-111344 && echo "addons-20210816231050-111344" | sudo tee /etc/hostname
	I0816 23:12:06.029373   91500 main.go:130] libmachine: SSH cmd err, output: <nil>: addons-20210816231050-111344
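
libmachine's "native" SSH client here is golang.org/x/crypto/ssh dialing the loopback port Docker published for 22/tcp (55004 in this run). A sketch of running one remote command the same way, assuming that package is available; the key path, port, and hostname are illustrative:

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("id_rsa") // illustrative: the machine key generated above
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // tolerable for a local throwaway container
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:55004", cfg) // port is illustrative
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(`sudo hostname example && echo example | sudo tee /etc/hostname`)
        fmt.Printf("%s err=%v\n", out, err)
    }
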
	
	I0816 23:12:06.036745   91500 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210816231050-111344
	I0816 23:12:06.433504   91500 main.go:130] libmachine: Using SSH client type: native
	I0816 23:12:06.433838   91500 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x12295a0] 0x1229560 <nil>  [] 0s} 127.0.0.1 55004 <nil> <nil>}
	I0816 23:12:06.433838   91500 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-20210816231050-111344' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-20210816231050-111344/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-20210816231050-111344' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 23:12:06.619370   91500 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0816 23:12:06.619683   91500 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins\minikube-integration\.minikube CaCertPath:C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins\minikube-integration\.minikube}
	I0816 23:12:06.620281   91500 ubuntu.go:177] setting up certificates
	I0816 23:12:06.620281   91500 provision.go:83] configureAuth start
	I0816 23:12:06.629357   91500 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20210816231050-111344
	I0816 23:12:07.043276   91500 provision.go:138] copyHostCerts
	I0816 23:12:07.043789   91500 exec_runner.go:152] cp: C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0816 23:12:07.045612   91500 exec_runner.go:152] cp: C:\Users\jenkins\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins\minikube-integration\.minikube/key.pem (1679 bytes)
	I0816 23:12:07.047269   91500 exec_runner.go:152] cp: C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0816 23:12:07.048409   91500 provision.go:112] generating server cert: C:\Users\jenkins\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.addons-20210816231050-111344 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-20210816231050-111344]
	I0816 23:12:07.253921   91500 provision.go:172] copyRemoteCerts
	I0816 23:12:07.260918   91500 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 23:12:07.266918   91500 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210816231050-111344
	I0816 23:12:07.673620   91500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55004 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\addons-20210816231050-111344\id_rsa Username:docker}
	I0816 23:12:07.800744   91500 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1257 bytes)
	I0816 23:12:07.844399   91500 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 23:12:07.892594   91500 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 23:12:07.936340   91500 provision.go:86] duration metric: configureAuth took 1.3157643s
	I0816 23:12:07.936340   91500 ubuntu.go:193] setting minikube options for container-runtime
	I0816 23:12:07.937094   91500 config.go:177] Loaded profile config "addons-20210816231050-111344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.21.3
	I0816 23:12:07.943740   91500 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210816231050-111344
	I0816 23:12:08.356176   91500 main.go:130] libmachine: Using SSH client type: native
	I0816 23:12:08.356660   91500 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x12295a0] 0x1229560 <nil>  [] 0s} 127.0.0.1 55004 <nil> <nil>}
	I0816 23:12:08.356660   91500 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0816 23:12:08.548042   91500 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0816 23:12:08.548042   91500 ubuntu.go:71] root file system type: overlay
	I0816 23:12:08.548533   91500 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0816 23:12:08.563398   91500 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210816231050-111344
	I0816 23:12:08.985369   91500 main.go:130] libmachine: Using SSH client type: native
	I0816 23:12:08.985956   91500 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x12295a0] 0x1229560 <nil>  [] 0s} 127.0.0.1 55004 <nil> <nil>}
	I0816 23:12:08.986110   91500 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0816 23:12:09.196022   91500 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0816 23:12:09.203249   91500 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210816231050-111344
	I0816 23:12:09.605906   91500 main.go:130] libmachine: Using SSH client type: native
	I0816 23:12:09.606369   91500 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x12295a0] 0x1229560 <nil>  [] 0s} 127.0.0.1 55004 <nil> <nil>}
	I0816 23:12:09.606369   91500 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0816 23:12:11.026597   91500 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-07-30 19:52:33.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2021-08-16 23:12:09.190071000 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	+BindsTo=containerd.service
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0816 23:12:11.026597   91500 machine.go:91] provisioned docker machine in 5.6563776s
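
The restart one-liner above ("diff ... || { mv ...; daemon-reload; enable; restart; }") makes unit installation idempotent: Docker is only restarted when the rendered unit actually differs from what is installed. A sketch of the same compare-and-swap pattern in Go, shelling out to systemctl; paths are illustrative and this is a variant of the pattern, not minikube's code:

    package main

    import (
        "bytes"
        "fmt"
        "log"
        "os"
        "os/exec"
    )

    // installIfChanged swaps newPath over path and reloads/restarts the unit
    // only when the contents differ -- the diff||mv pattern from the log.
    func installIfChanged(path, newPath, unit string) error {
        oldData, _ := os.ReadFile(path) // a missing file reads as nil => "changed"
        newData, err := os.ReadFile(newPath)
        if err != nil {
            return err
        }
        if bytes.Equal(oldData, newData) {
            return nil // already installed; nothing to restart
        }
        if err := os.Rename(newPath, path); err != nil {
            return err
        }
        for _, args := range [][]string{{"daemon-reload"}, {"enable", unit}, {"restart", unit}} {
            if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
                return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
            }
        }
        return nil
    }

    func main() {
        err := installIfChanged("/lib/systemd/system/docker.service",
            "/lib/systemd/system/docker.service.new", "docker")
        if err != nil {
            log.Fatal(err)
        }
    }
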
	I0816 23:12:11.026837   91500 client.go:171] LocalClient.Create took 22.6653824s
	I0816 23:12:11.026837   91500 start.go:168] duration metric: libmachine.API.Create for "addons-20210816231050-111344" took 22.6655449s
	I0816 23:12:11.026837   91500 start.go:267] post-start starting for "addons-20210816231050-111344" (driver="docker")
	I0816 23:12:11.026837   91500 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 23:12:11.034307   91500 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 23:12:11.038880   91500 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210816231050-111344
	I0816 23:12:11.456733   91500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55004 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\addons-20210816231050-111344\id_rsa Username:docker}
	I0816 23:12:11.606874   91500 ssh_runner.go:149] Run: cat /etc/os-release
	I0816 23:12:11.618641   91500 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0816 23:12:11.618641   91500 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0816 23:12:11.618641   91500 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0816 23:12:11.618888   91500 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0816 23:12:11.618888   91500 filesync.go:126] Scanning C:\Users\jenkins\minikube-integration\.minikube\addons for local assets ...
	I0816 23:12:11.619186   91500 filesync.go:126] Scanning C:\Users\jenkins\minikube-integration\.minikube\files for local assets ...
	I0816 23:12:11.619477   91500 start.go:270] post-start completed in 592.6179ms
	I0816 23:12:11.632187   91500 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20210816231050-111344
	I0816 23:12:12.049839   91500 profile.go:148] Saving config to C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344\config.json ...
	I0816 23:12:12.062957   91500 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 23:12:12.067636   91500 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210816231050-111344
	I0816 23:12:12.467764   91500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55004 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\addons-20210816231050-111344\id_rsa Username:docker}
	I0816 23:12:12.588620   91500 start.go:129] duration metric: createHost completed in 24.2365618s
	I0816 23:12:12.588620   91500 start.go:80] releasing machines lock for "addons-20210816231050-111344", held for 24.2372023s
	I0816 23:12:12.596747   91500 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20210816231050-111344
	I0816 23:12:13.006866   91500 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0816 23:12:13.014709   91500 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210816231050-111344
	I0816 23:12:13.015234   91500 ssh_runner.go:149] Run: systemctl --version
	I0816 23:12:13.021012   91500 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210816231050-111344
	I0816 23:12:13.429206   91500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55004 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\addons-20210816231050-111344\id_rsa Username:docker}
	I0816 23:12:13.437267   91500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55004 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\addons-20210816231050-111344\id_rsa Username:docker}
	I0816 23:12:13.668255   91500 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0816 23:12:13.701639   91500 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0816 23:12:13.727910   91500 cruntime.go:249] skipping containerd shutdown because we are bound to it
	I0816 23:12:13.737367   91500 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0816 23:12:13.761132   91500 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 23:12:13.802442   91500 ssh_runner.go:149] Run: sudo systemctl unmask docker.service
	I0816 23:12:13.964136   91500 ssh_runner.go:149] Run: sudo systemctl enable docker.socket
	I0816 23:12:14.118261   91500 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0816 23:12:14.153032   91500 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0816 23:12:14.292947   91500 ssh_runner.go:149] Run: sudo systemctl start docker
	I0816 23:12:14.326305   91500 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0816 23:12:14.456042   91500 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0816 23:12:14.564951   91500 out.go:204] * Preparing Kubernetes v1.21.3 on Docker 20.10.8 ...
	I0816 23:12:14.571745   91500 cli_runner.go:115] Run: docker exec -t addons-20210816231050-111344 dig +short host.docker.internal
	I0816 23:12:15.173815   91500 network.go:69] got host ip for mount in container by digging dns: 192.168.65.2
	I0816 23:12:15.186380   91500 ssh_runner.go:149] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0816 23:12:15.199993   91500 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
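
That /etc/hosts edit is a small idempotent rewrite: drop any stale host.minikube.internal line, append the fresh mapping, and swap the result in via a temp file. A sketch of the same rewrite in Go, run against an illustrative scratch path rather than the real /etc/hosts:

    package main

    import (
        "fmt"
        "log"
        "os"
        "strings"
    )

    // ensureHostsEntry rewrites the file so exactly one line maps name -> ip,
    // mirroring the `{ grep -v ...; echo ...; } > tmp; cp tmp hosts` shell pattern.
    func ensureHostsEntry(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            if !strings.HasSuffix(line, "\t"+name) { // drop any stale mapping
                kept = append(kept, line)
            }
        }
        kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
        tmp := path + ".tmp"
        if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            return err
        }
        return os.Rename(tmp, path) // atomic swap on the same filesystem
    }

    func main() {
        if err := ensureHostsEntry("hosts.sample", "192.168.65.2", "host.minikube.internal"); err != nil {
            log.Fatal(err)
        }
    }
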
	I0816 23:12:15.234666   91500 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" addons-20210816231050-111344
	I0816 23:12:15.643454   91500 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
	I0816 23:12:15.650027   91500 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0816 23:12:15.723253   91500 docker.go:535] Got preloaded images: 
	I0816 23:12:15.723253   91500 docker.go:541] k8s.gcr.io/kube-apiserver:v1.21.3 wasn't preloaded
	I0816 23:12:15.735221   91500 ssh_runner.go:149] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0816 23:12:15.765126   91500 ssh_runner.go:149] Run: which lz4
	I0816 23:12:15.789523   91500 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0816 23:12:15.802973   91500 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0816 23:12:15.802973   91500 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (504826016 bytes)
	I0816 23:12:43.434152   91500 docker.go:500] Took 27.655130 seconds to copy over tarball
	I0816 23:12:43.451253   91500 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 23:12:50.376479   91500 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (6.9249632s)
	I0816 23:12:50.376594   91500 ssh_runner.go:100] rm: /preloaded.tar.lz4
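
Copying the ~500 MB preload tarball dominates this phase (27.7s over scp, then 6.9s to unpack). The unpack step is a plain shell-out: tar with lz4 as the decompression filter, extracted over /var so the Docker image store arrives pre-seeded. A sketch of that single step, assuming tar and lz4 are on PATH:

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // -I lz4 tells tar to filter the archive through the lz4 binary;
        // -C /var extracts relative to /var so lib/docker/... lands in place.
        cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("extract failed: %v\n%s", err, out)
        }
    }
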
	I0816 23:12:50.654540   91500 ssh_runner.go:149] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0816 23:12:50.675359   91500 ssh_runner.go:316] scp memory --> /var/lib/docker/image/overlay2/repositories.json (3152 bytes)
	I0816 23:12:50.722775   91500 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0816 23:12:50.889785   91500 ssh_runner.go:149] Run: sudo systemctl restart docker
	I0816 23:12:56.422186   91500 ssh_runner.go:189] Completed: sudo systemctl restart docker: (5.531191s)
	I0816 23:12:56.436704   91500 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0816 23:12:56.531501   91500 docker.go:535] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.21.3
	k8s.gcr.io/kube-scheduler:v1.21.3
	k8s.gcr.io/kube-controller-manager:v1.21.3
	k8s.gcr.io/kube-proxy:v1.21.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.4.1
	kubernetesui/dashboard:v2.1.0
	k8s.gcr.io/coredns/coredns:v1.8.0
	k8s.gcr.io/etcd:3.4.13-0
	kubernetesui/metrics-scraper:v1.0.4
	
	-- /stdout --
	I0816 23:12:56.531724   91500 cache_images.go:74] Images are preloaded, skipping loading
	I0816 23:12:56.541713   91500 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
	I0816 23:12:56.759629   91500 cni.go:93] Creating CNI manager for ""
	I0816 23:12:56.759629   91500 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0816 23:12:56.759879   91500 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0816 23:12:56.760098   91500 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-20210816231050-111344 NodeName:addons-20210816231050-111344 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0816 23:12:56.761191   91500 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "addons-20210816231050-111344"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
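
Everything from "kubeadm config:" down to here is one generated manifest: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration rendered from the profile (node IP 192.168.49.2, pod CIDR 10.244.0.0/16, cgroupfs driver). A minimal sketch of rendering such a manifest with text/template; the struct, field names, and template subset are illustrative, not minikube's actual template:

    package main

    import (
        "os"
        "text/template"
    )

    // kubeadmParams is an illustrative subset of the values substituted into the manifest.
    type kubeadmParams struct {
        NodeName         string
        AdvertiseAddress string
        PodSubnet        string
    }

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta2
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: 8443
    nodeRegistration:
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.AdvertiseAddress}}
    ---
    apiVersion: kubeadm.k8s.io/v1beta2
    kind: ClusterConfiguration
    networking:
      podSubnet: "{{.PodSubnet}}"
    `

    func main() {
        t := template.Must(template.New("kubeadm").Parse(tmpl))
        p := kubeadmParams{"addons-example", "192.168.49.2", "10.244.0.0/16"}
        if err := t.Execute(os.Stdout, p); err != nil {
            panic(err)
        }
    }
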
	
	I0816 23:12:56.761949   91500 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=addons-20210816231050-111344 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:addons-20210816231050-111344 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0816 23:12:56.773402   91500 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0816 23:12:56.797272   91500 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 23:12:56.808968   91500 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 23:12:56.835161   91500 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0816 23:12:56.874215   91500 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 23:12:56.906902   91500 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I0816 23:12:56.955834   91500 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0816 23:12:56.967200   91500 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 23:12:56.993270   91500 certs.go:52] Setting up C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344 for IP: 192.168.49.2
	I0816 23:12:56.993780   91500 certs.go:183] generating minikubeCA CA: C:\Users\jenkins\minikube-integration\.minikube\ca.key
	I0816 23:12:57.467056   91500 crypto.go:157] Writing cert to C:\Users\jenkins\minikube-integration\.minikube\ca.crt ...
	I0816 23:12:57.467056   91500 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\.minikube\ca.crt: {Name:mke4d5cfb5e6f4248aa163cfb3c71b3e7dbe1ab8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 23:12:57.470052   91500 crypto.go:165] Writing key to C:\Users\jenkins\minikube-integration\.minikube\ca.key ...
	I0816 23:12:57.470052   91500 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\.minikube\ca.key: {Name:mk162415564f1fc31c56931ba642cf75f7b5bbff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 23:12:57.472062   91500 certs.go:183] generating proxyClientCA CA: C:\Users\jenkins\minikube-integration\.minikube\proxy-client-ca.key
	I0816 23:12:57.683936   91500 crypto.go:157] Writing cert to C:\Users\jenkins\minikube-integration\.minikube\proxy-client-ca.crt ...
	I0816 23:12:57.683936   91500 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\.minikube\proxy-client-ca.crt: {Name:mk31451cf4e597b8d26c022b75509bc40fb9761d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 23:12:57.686994   91500 crypto.go:165] Writing key to C:\Users\jenkins\minikube-integration\.minikube\proxy-client-ca.key ...
	I0816 23:12:57.686994   91500 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\.minikube\proxy-client-ca.key: {Name:mkbe5ce726bcbaeb3a1512b599eb17b5257243a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 23:12:57.689772   91500 certs.go:297] generating minikube-user signed cert: C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344\client.key
	I0816 23:12:57.689926   91500 crypto.go:69] Generating cert C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344\client.crt with IP's: []
	I0816 23:12:57.907733   91500 crypto.go:157] Writing cert to C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344\client.crt ...
	I0816 23:12:57.907733   91500 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344\client.crt: {Name:mkf0ce067be8027efc4a1daca6a672916273bee5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 23:12:57.910353   91500 crypto.go:165] Writing key to C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344\client.key ...
	I0816 23:12:57.910488   91500 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344\client.key: {Name:mk814774bf4498ca9b57942d945626a42ef8cc1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 23:12:57.912632   91500 certs.go:297] generating minikube signed cert: C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344\apiserver.key.dd3b5fb2
	I0816 23:12:57.912632   91500 crypto.go:69] Generating cert C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344\apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0816 23:12:58.247180   91500 crypto.go:157] Writing cert to C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344\apiserver.crt.dd3b5fb2 ...
	I0816 23:12:58.247180   91500 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344\apiserver.crt.dd3b5fb2: {Name:mk2ad0ee728c84d567ddd1a3e110b1de5a6d1817 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 23:12:58.249399   91500 crypto.go:165] Writing key to C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344\apiserver.key.dd3b5fb2 ...
	I0816 23:12:58.249399   91500 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344\apiserver.key.dd3b5fb2: {Name:mkc67ce45e8ae9d109ea2030ec5bb03066d5cd3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 23:12:58.250378   91500 certs.go:308] copying C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344\apiserver.crt.dd3b5fb2 -> C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344\apiserver.crt
	I0816 23:12:58.257382   91500 certs.go:312] copying C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344\apiserver.key.dd3b5fb2 -> C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344\apiserver.key
	I0816 23:12:58.266856   91500 certs.go:297] generating aggregator signed cert: C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344\proxy-client.key
	I0816 23:12:58.266856   91500 crypto.go:69] Generating cert C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344\proxy-client.crt with IP's: []
	I0816 23:12:58.445839   91500 crypto.go:157] Writing cert to C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344\proxy-client.crt ...
	I0816 23:12:58.445839   91500 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344\proxy-client.crt: {Name:mk282bc94bfef373bd677c25b5fb38a2fdf59a46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 23:12:58.449256   91500 crypto.go:165] Writing key to C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344\proxy-client.key ...
	I0816 23:12:58.449256   91500 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344\proxy-client.key: {Name:mk2542aee765680b03c15fa9acf8b91469b68c89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
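
The certs.go/crypto.go lines above build a full PKI from scratch: a self-signed minikubeCA, then leaf certificates signed by it, with the apiserver cert carrying SAN entries for every address a client might use (192.168.49.2, 10.96.0.1, 127.0.0.1, ...). A compact sketch of that CA-plus-leaf pattern with Go's crypto/x509; names and lifetimes are illustrative:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // CA key + self-signed CA cert (the minikubeCA role in the log).
        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        ca := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "exampleCA"}, // illustrative
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        // Leaf serving cert signed by the CA, carrying SAN IPs like those above.
        leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        leaf := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1")},
            DNSNames:     []string{"localhost", "minikube"},
        }
        leafDER, err := x509.CreateCertificate(rand.Reader, leaf, ca, &leafKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: caDER})
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
    }
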
	I0816 23:12:58.460095   91500 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0816 23:12:58.461098   91500 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0816 23:12:58.461254   91500 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0816 23:12:58.461666   91500 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0816 23:12:58.465340   91500 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0816 23:12:58.512247   91500 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 23:12:58.561915   91500 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 23:12:58.605045   91500 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 23:12:58.650109   91500 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 23:12:58.694563   91500 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 23:12:58.737224   91500 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 23:12:58.786129   91500 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0816 23:12:58.831552   91500 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 23:12:58.878000   91500 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 23:12:58.926468   91500 ssh_runner.go:149] Run: openssl version
	I0816 23:12:58.954304   91500 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 23:12:58.983741   91500 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 23:12:59.000087   91500 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 16 23:12 /usr/share/ca-certificates/minikubeCA.pem
	I0816 23:12:59.008635   91500 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 23:12:59.036048   91500 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 23:12:59.058790   91500 kubeadm.go:390] StartCluster: {Name:addons-20210816231050-111344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:addons-20210816231050-111344 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 23:12:59.068739   91500 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0816 23:12:59.150200   91500 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 23:12:59.178366   91500 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 23:12:59.198833   91500 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0816 23:12:59.210080   91500 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 23:12:59.230749   91500 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 23:12:59.230995   91500 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0816 23:13:24.089387   91500 out.go:204]   - Generating certificates and keys ...
	I0816 23:13:24.094062   91500 out.go:204]   - Booting up control plane ...
	I0816 23:13:24.098359   91500 out.go:204]   - Configuring RBAC rules ...
	I0816 23:13:24.101480   91500 cni.go:93] Creating CNI manager for ""
	I0816 23:13:24.101711   91500 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0816 23:13:24.101711   91500 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 23:13:24.110618   91500 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48 minikube.k8s.io/name=addons-20210816231050-111344 minikube.k8s.io/updated_at=2021_08_16T23_13_24_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 23:13:24.113863   91500 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 23:13:24.160001   91500 ops.go:34] apiserver oom_adj: -16
	I0816 23:13:25.131074   91500 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig: (1.0171727s)
	I0816 23:13:25.131074   91500 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48 minikube.k8s.io/name=addons-20210816231050-111344 minikube.k8s.io/updated_at=2021_08_16T23_13_24_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig: (1.0198434s)
	I0816 23:13:25.140023   91500 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 23:13:26.131557   91500 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 23:13:26.633302   91500 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 23:13:27.131320   91500 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 23:13:27.631904   91500 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 23:13:28.134233   91500 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 23:13:28.632717   91500 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 23:13:29.132318   91500 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 23:13:29.632493   91500 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 23:13:30.130945   91500 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 23:13:30.633363   91500 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 23:13:31.131896   91500 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 23:13:31.632985   91500 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 23:13:32.134627   91500 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 23:13:32.629685   91500 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 23:13:33.133384   91500 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 23:13:33.635116   91500 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 23:13:34.131398   91500 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 23:13:34.632685   91500 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 23:13:35.131709   91500 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 23:13:35.632791   91500 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 23:13:36.137616   91500 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 23:13:37.133363   91500 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 23:13:37.385535   91500 kubeadm.go:985] duration metric: took 13.2833193s to wait for elevateKubeSystemPrivileges.
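
The burst of identical "kubectl get sa default" runs above is a fixed-interval poll: kubeadm returns before kube-controller-manager has created the default service account, so minikube retries roughly every 500ms until the lookup succeeds (13.3s in this run). A generic version of that wait loop; the probe closure is illustrative:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
        "time"
    )

    // pollUntil runs probe every interval until it succeeds or the deadline passes.
    func pollUntil(interval, timeout time.Duration, probe func() error) error {
        deadline := time.Now().Add(timeout)
        for {
            if err := probe(); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return errors.New("timed out waiting for probe")
            }
            time.Sleep(interval)
        }
    }

    func main() {
        start := time.Now()
        err := pollUntil(500*time.Millisecond, 2*time.Minute, func() error {
            // illustrative probe: the default SA exists once the controller manager creates it
            return exec.Command("kubectl", "get", "sa", "default").Run()
        })
        fmt.Printf("waited %s, err=%v\n", time.Since(start), err)
    }
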
	I0816 23:13:37.385535   91500 kubeadm.go:392] StartCluster complete in 38.3252889s
	I0816 23:13:37.385762   91500 settings.go:142] acquiring lock: {Name:mk81656fcf8bcddd49caaa1adb1c177165a02100 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 23:13:37.385992   91500 settings.go:150] Updating kubeconfig:  C:\Users\jenkins\minikube-integration\kubeconfig
	I0816 23:13:37.387181   91500 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\kubeconfig: {Name:mk312e0248780fd448f3a83862df8ee597f47373 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 23:13:37.995377   91500 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "addons-20210816231050-111344" rescaled to 1
	I0816 23:13:37.995377   91500 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 23:13:37.995377   91500 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0816 23:13:37.998361   91500 out.go:177] * Verifying Kubernetes components...
	I0816 23:13:37.996385   91500 addons.go:342] enableAddons start: toEnable=map[], additional=[registry metrics-server olm volumesnapshots csi-hostpath-driver ingress helm-tiller]
	I0816 23:13:37.996385   91500 config.go:177] Loaded profile config "addons-20210816231050-111344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.21.3
	I0816 23:13:37.998361   91500 addons.go:59] Setting volumesnapshots=true in profile "addons-20210816231050-111344"
	I0816 23:13:37.998361   91500 addons.go:59] Setting metrics-server=true in profile "addons-20210816231050-111344"
	I0816 23:13:37.998361   91500 addons.go:59] Setting default-storageclass=true in profile "addons-20210816231050-111344"
	I0816 23:13:37.998361   91500 addons.go:59] Setting helm-tiller=true in profile "addons-20210816231050-111344"
	I0816 23:13:37.998361   91500 addons.go:59] Setting storage-provisioner=true in profile "addons-20210816231050-111344"
	I0816 23:13:37.998361   91500 addons.go:59] Setting olm=true in profile "addons-20210816231050-111344"
	I0816 23:13:37.998361   91500 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-20210816231050-111344"
	I0816 23:13:37.998361   91500 addons.go:59] Setting ingress=true in profile "addons-20210816231050-111344"
	I0816 23:13:37.999403   91500 addons.go:135] Setting addon helm-tiller=true in "addons-20210816231050-111344"
	I0816 23:13:37.999403   91500 addons.go:135] Setting addon olm=true in "addons-20210816231050-111344"
	I0816 23:13:37.999403   91500 addons.go:135] Setting addon volumesnapshots=true in "addons-20210816231050-111344"
	I0816 23:13:37.999403   91500 addons.go:135] Setting addon metrics-server=true in "addons-20210816231050-111344"
	I0816 23:13:37.999403   91500 addons.go:135] Setting addon ingress=true in "addons-20210816231050-111344"
	I0816 23:13:37.998361   91500 addons.go:59] Setting registry=true in profile "addons-20210816231050-111344"
	I0816 23:13:38.000379   91500 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0816 23:13:37.999403   91500 addons.go:135] Setting addon registry=true in "addons-20210816231050-111344"
	I0816 23:13:38.001375   91500 host.go:66] Checking if "addons-20210816231050-111344" exists ...
	I0816 23:13:37.999403   91500 addons.go:135] Setting addon storage-provisioner=true in "addons-20210816231050-111344"
	W0816 23:13:38.001375   91500 addons.go:147] addon storage-provisioner should already be in state true
	I0816 23:13:37.998361   91500 addons.go:59] Setting csi-hostpath-driver=true in profile "addons-20210816231050-111344"
	I0816 23:13:38.001375   91500 host.go:66] Checking if "addons-20210816231050-111344" exists ...
	I0816 23:13:37.999403   91500 host.go:66] Checking if "addons-20210816231050-111344" exists ...
	I0816 23:13:37.999403   91500 host.go:66] Checking if "addons-20210816231050-111344" exists ...
	I0816 23:13:37.999403   91500 host.go:66] Checking if "addons-20210816231050-111344" exists ...
	I0816 23:13:37.999403   91500 host.go:66] Checking if "addons-20210816231050-111344" exists ...
	I0816 23:13:38.001375   91500 host.go:66] Checking if "addons-20210816231050-111344" exists ...
	I0816 23:13:38.001375   91500 addons.go:135] Setting addon csi-hostpath-driver=true in "addons-20210816231050-111344"
	I0816 23:13:38.001375   91500 host.go:66] Checking if "addons-20210816231050-111344" exists ...
	I0816 23:13:38.011371   91500 cli_runner.go:115] Run: docker container inspect addons-20210816231050-111344 --format={{.State.Status}}
	I0816 23:13:38.014370   91500 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 23:13:38.040990   91500 cli_runner.go:115] Run: docker container inspect addons-20210816231050-111344 --format={{.State.Status}}
	I0816 23:13:38.041295   91500 cli_runner.go:115] Run: docker container inspect addons-20210816231050-111344 --format={{.State.Status}}
	I0816 23:13:38.042649   91500 cli_runner.go:115] Run: docker container inspect addons-20210816231050-111344 --format={{.State.Status}}
	I0816 23:13:38.043043   91500 cli_runner.go:115] Run: docker container inspect addons-20210816231050-111344 --format={{.State.Status}}
	I0816 23:13:38.043453   91500 cli_runner.go:115] Run: docker container inspect addons-20210816231050-111344 --format={{.State.Status}}
	I0816 23:13:38.046163   91500 cli_runner.go:115] Run: docker container inspect addons-20210816231050-111344 --format={{.State.Status}}
	I0816 23:13:38.047397   91500 cli_runner.go:115] Run: docker container inspect addons-20210816231050-111344 --format={{.State.Status}}
	I0816 23:13:38.048459   91500 cli_runner.go:115] Run: docker container inspect addons-20210816231050-111344 --format={{.State.Status}}
	I0816 23:13:38.697308   91500 out.go:177]   - Using image quay.io/operator-framework/olm:v0.17.0
	I0816 23:13:38.699309   91500 out.go:177]   - Using image quay.io/operator-framework/upstream-community-operators:07bbc13
	I0816 23:13:38.705301   91500 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 23:13:38.705301   91500 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 23:13:38.705301   91500 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 23:13:38.707320   91500 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0816 23:13:38.709322   91500 out.go:177]   - Using image k8s.gcr.io/ingress-nginx/controller:v0.44.0
	I0816 23:13:38.713307   91500 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0816 23:13:38.713307   91500 addons.go:275] installing /etc/kubernetes/addons/ingress-configmap.yaml
	I0816 23:13:38.714305   91500 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-configmap.yaml (1865 bytes)
	I0816 23:13:38.717314   91500 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210816231050-111344
	I0816 23:13:38.723357   91500 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210816231050-111344
	I0816 23:13:38.731386   91500 out.go:177]   - Using image gcr.io/kubernetes-helm/tiller:v2.16.12
	I0816 23:13:38.731386   91500 addons.go:275] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0816 23:13:38.731386   91500 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2433 bytes)
	I0816 23:13:38.737306   91500 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-controller:v0.2.0
	I0816 23:13:38.743309   91500 out.go:177]   - Using image k8s.gcr.io/sig-storage/hostpathplugin:v1.6.0
	I0816 23:13:38.740307   91500 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210816231050-111344
	I0816 23:13:38.745306   91500 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-agent:v0.2.0
	I0816 23:13:38.746303   91500 addons.go:275] installing /etc/kubernetes/addons/crds.yaml
	I0816 23:13:38.747318   91500 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-attacher:v3.1.0
	I0816 23:13:38.747318   91500 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/crds.yaml (825331 bytes)
	I0816 23:13:38.749330   91500 out.go:177]   - Using image k8s.gcr.io/sig-storage/livenessprobe:v2.2.0
	I0816 23:13:38.751310   91500 out.go:177]   - Using image k8s.gcr.io/metrics-server/metrics-server:v0.4.2
	I0816 23:13:38.752318   91500 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-resizer:v1.1.0
	I0816 23:13:38.750330   91500 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "5000/tcp") 0).HostPort}}'" addons-20210816231050-111344
	I0816 23:13:38.751310   91500 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 23:13:38.752318   91500 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0816 23:13:38.754349   91500 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0
	I0816 23:13:38.757309   91500 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1
	I0816 23:13:38.755304   91500 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210816231050-111344
	I0816 23:13:38.758310   91500 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0
	I0816 23:13:38.760318   91500 out.go:177]   - Using image k8s.gcr.io/sig-storage/snapshot-controller:v4.0.0
	I0816 23:13:38.758310   91500 addons.go:275] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0816 23:13:38.760318   91500 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210816231050-111344
	I0816 23:13:38.760318   91500 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0816 23:13:38.760318   91500 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0816 23:13:38.760318   91500 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0816 23:13:38.769521   91500 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210816231050-111344
	I0816 23:13:38.771318   91500 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210816231050-111344
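
The nested-index inspections above resolve which host port Docker mapped to the node's SSH port 22/tcp; every ssh client opened at 23:13:39 below connects to the result. Standalone:

    docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
      addons-20210816231050-111344
    # -> the host-side port of the 22/tcp mapping, 55004 in this run
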
	I0816 23:13:38.794325   91500 addons.go:135] Setting addon default-storageclass=true in "addons-20210816231050-111344"
	W0816 23:13:38.794325   91500 addons.go:147] addon default-storageclass should already be in state true
	I0816 23:13:38.794325   91500 host.go:66] Checking if "addons-20210816231050-111344" exists ...
	I0816 23:13:38.806379   91500 cli_runner.go:115] Run: docker container inspect addons-20210816231050-111344 --format={{.State.Status}}
	I0816 23:13:39.222588   91500 ssh_runner.go:189] Completed: sudo systemctl is-active --quiet service kubelet: (1.2079202s)
	I0816 23:13:39.222588   91500 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.226913s)
	I0816 23:13:39.223073   91500 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
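
The pipeline above splices a "hosts" block in front of the "forward . /etc/resolv.conf" line of the CoreDNS Corefile so that host.minikube.internal resolves to the host gateway (192.168.65.2) from inside the cluster. Re-wrapped for readability (same sed expression; this sketch assumes a kubectl already pointed at the cluster rather than the node's bundled binary and kubeconfig):

    kubectl -n kube-system get configmap coredns -o yaml \
      | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' \
      | kubectl replace -f -

Once the "host record injected into CoreDNS" line appears further down, the record can be spot-checked with a throwaway pod, e.g. "kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup host.minikube.internal" (pod name and image are purely illustrative).
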
	I0816 23:13:39.238186   91500 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" addons-20210816231050-111344
	I0816 23:13:39.336487   91500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55004 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\addons-20210816231050-111344\id_rsa Username:docker}
	I0816 23:13:39.368569   91500 out.go:177] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                      │
	│    Registry addon with docker driver uses port 55002 please use that instead of default port 5000    │
	│                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 23:13:39.370435   91500 out.go:177] * For more information see: https://minikube.sigs.k8s.io/docs/drivers/docker
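
The boxed notice means the registry addon's conventional localhost:5000 endpoint is remapped under the docker driver. A hypothetical push through the remapped port (image name is illustrative; the Docker daemon treats 127.0.0.1 registries as insecure by default):

    docker tag alpine 127.0.0.1:55002/alpine
    docker push 127.0.0.1:55002/alpine
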
	I0816 23:13:39.372342   91500 out.go:177]   - Using image gcr.io/google_containers/kube-registry-proxy:0.4
	I0816 23:13:39.374350   91500 out.go:177]   - Using image registry:2.7.1
	I0816 23:13:39.374739   91500 addons.go:275] installing /etc/kubernetes/addons/registry-rc.yaml
	I0816 23:13:39.374893   91500 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (788 bytes)
	I0816 23:13:39.382483   91500 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210816231050-111344
	I0816 23:13:39.384605   91500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55004 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\addons-20210816231050-111344\id_rsa Username:docker}
	I0816 23:13:39.387119   91500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55004 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\addons-20210816231050-111344\id_rsa Username:docker}
	I0816 23:13:39.404643   91500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55004 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\addons-20210816231050-111344\id_rsa Username:docker}
	I0816 23:13:39.407110   91500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55004 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\addons-20210816231050-111344\id_rsa Username:docker}
	I0816 23:13:39.411119   91500 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 23:13:39.411119   91500 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 23:13:39.416122   91500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55004 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\addons-20210816231050-111344\id_rsa Username:docker}
	I0816 23:13:39.417122   91500 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210816231050-111344
	I0816 23:13:39.423121   91500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55004 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\addons-20210816231050-111344\id_rsa Username:docker}
	I0816 23:13:39.782997   91500 node_ready.go:35] waiting up to 6m0s for node "addons-20210816231050-111344" to be "Ready" ...
	I0816 23:13:39.884770   91500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55004 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\addons-20210816231050-111344\id_rsa Username:docker}
	I0816 23:13:39.936039   91500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55004 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\addons-20210816231050-111344\id_rsa Username:docker}
	I0816 23:13:39.941541   91500 node_ready.go:49] node "addons-20210816231050-111344" has status "Ready":"True"
	I0816 23:13:39.941665   91500 node_ready.go:38] duration metric: took 158.6617ms waiting for node "addons-20210816231050-111344" to be "Ready" ...
	I0816 23:13:39.941665   91500 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 23:13:40.009255   91500 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-krff4" in "kube-system" namespace to be "Ready" ...
	I0816 23:13:40.644754   91500 addons.go:275] installing /etc/kubernetes/addons/ingress-rbac.yaml
	I0816 23:13:40.644999   91500 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-rbac.yaml (6005 bytes)
	I0816 23:13:40.713006   91500 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0816 23:13:40.713006   91500 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0816 23:13:40.890203   91500 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 23:13:40.890203   91500 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1931 bytes)
	I0816 23:13:40.998635   91500 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 23:13:41.000743   91500 addons.go:275] installing /etc/kubernetes/addons/olm.yaml
	I0816 23:13:41.001876   91500 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/olm.yaml (9882 bytes)
	I0816 23:13:41.135605   91500 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 23:13:41.135605   91500 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0816 23:13:41.188691   91500 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0816 23:13:41.188691   91500 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0816 23:13:41.194124   91500 addons.go:275] installing /etc/kubernetes/addons/registry-svc.yaml
	I0816 23:13:41.194124   91500 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0816 23:13:41.211935   91500 addons.go:275] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0816 23:13:41.212237   91500 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0816 23:13:41.300592   91500 addons.go:275] installing /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml
	I0816 23:13:41.300737   91500 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml (2203 bytes)
	I0816 23:13:41.399686   91500 addons.go:275] installing /etc/kubernetes/addons/ingress-dp.yaml
	I0816 23:13:41.399686   91500 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-dp.yaml (9394 bytes)
	I0816 23:13:41.441469   91500 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml
	I0816 23:13:41.510141   91500 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 23:13:42.110228   91500 pod_ready.go:102] pod "coredns-558bd4d5db-krff4" in "kube-system" namespace has status "Ready":"False"
	I0816 23:13:42.131721   91500 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 23:13:42.131721   91500 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0816 23:13:42.206707   91500 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0816 23:13:42.206707   91500 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19584 bytes)
	I0816 23:13:42.300322   91500 addons.go:275] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0816 23:13:42.300322   91500 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0816 23:13:42.317439   91500 addons.go:275] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0816 23:13:42.317439   91500 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (950 bytes)
	I0816 23:13:42.331452   91500 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/ingress-configmap.yaml -f /etc/kubernetes/addons/ingress-rbac.yaml -f /etc/kubernetes/addons/ingress-dp.yaml
	I0816 23:13:42.427064   91500 addons.go:275] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0816 23:13:42.427064   91500 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3037 bytes)
	I0816 23:13:42.807681   91500 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0816 23:13:42.820987   91500 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 23:13:42.837238   91500 addons.go:275] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0816 23:13:42.837238   91500 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3428 bytes)
	I0816 23:13:42.921378   91500 addons.go:275] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0816 23:13:42.921378   91500 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (3666 bytes)
	I0816 23:13:43.309525   91500 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0816 23:13:43.501903   91500 addons.go:275] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0816 23:13:43.501903   91500 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1071 bytes)
	I0816 23:13:43.689543   91500 addons.go:275] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0816 23:13:43.689543   91500 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2944 bytes)
	I0816 23:13:43.923628   91500 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0816 23:13:44.137173   91500 addons.go:275] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0816 23:13:44.137173   91500 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3194 bytes)
	I0816 23:13:44.607200   91500 pod_ready.go:102] pod "coredns-558bd4d5db-krff4" in "kube-system" namespace has status "Ready":"False"
	I0816 23:13:45.013408   91500 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0816 23:13:45.013408   91500 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2421 bytes)
	I0816 23:13:45.113124   91500 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (5.8898275s)
	I0816 23:13:45.113124   91500 start.go:728] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0816 23:13:45.426593   91500 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0816 23:13:45.426593   91500 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1034 bytes)
	I0816 23:13:46.191898   91500 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0816 23:13:46.191898   91500 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (6710 bytes)
	I0816 23:13:46.536528   91500 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-provisioner.yaml
	I0816 23:13:46.536528   91500 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-provisioner.yaml (2555 bytes)
	I0816 23:13:47.112462   91500 pod_ready.go:102] pod "coredns-558bd4d5db-krff4" in "kube-system" namespace has status "Ready":"False"
	I0816 23:13:47.317844   91500 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.3187574s)
	I0816 23:13:47.317844   91500 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0816 23:13:47.317844   91500 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2469 bytes)
	I0816 23:13:47.903410   91500 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml
	I0816 23:13:47.903518   91500 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml (2555 bytes)
	I0816 23:13:49.034169   91500 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0816 23:13:49.034169   91500 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0816 23:13:49.125599   91500 pod_ready.go:102] pod "coredns-558bd4d5db-krff4" in "kube-system" namespace has status "Ready":"False"
	I0816 23:13:50.245005   91500 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0816 23:13:51.815931   91500 pod_ready.go:102] pod "coredns-558bd4d5db-krff4" in "kube-system" namespace has status "Ready":"False"
	I0816 23:13:54.186735   91500 pod_ready.go:102] pod "coredns-558bd4d5db-krff4" in "kube-system" namespace has status "Ready":"False"
	I0816 23:13:56.222016   91500 pod_ready.go:102] pod "coredns-558bd4d5db-krff4" in "kube-system" namespace has status "Ready":"False"
	I0816 23:13:57.889141   91500 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: (16.4470479s)
	W0816 23:13:57.889141   91500 addons.go:296] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
	namespace/olm created
	namespace/operators created
	serviceaccount/olm-operator-serviceaccount created
	clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
	clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
	deployment.apps/olm-operator created
	deployment.apps/catalog-operator created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "ClusterServiceVersion" in version "operators.coreos.com/v1alpha1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "CatalogSource" in version "operators.coreos.com/v1alpha1"
	I0816 23:13:57.889371   91500 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
	namespace/olm created
	namespace/operators created
	serviceaccount/olm-operator-serviceaccount created
	clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
	clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
	deployment.apps/olm-operator created
	deployment.apps/catalog-operator created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "ClusterServiceVersion" in version "operators.coreos.com/v1alpha1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "CatalogSource" in version "operators.coreos.com/v1alpha1"
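
Both dumps above show the classic CRD-registration race: crds.yaml and olm.yaml go through a single kubectl apply, and the OperatorGroup/ClusterServiceVersion/CatalogSource objects in olm.yaml reference kinds whose CRDs were created only milliseconds earlier and are not yet served, hence "no matches for kind". minikube simply retries (the re-run at 23:13:58 below). A manual equivalent avoids the race by waiting for the CRDs to become established first (a sketch; CRD names taken from the stdout above):

    kubectl apply -f /etc/kubernetes/addons/crds.yaml
    kubectl wait --for condition=established --timeout=60s \
      crd/operatorgroups.operators.coreos.com \
      crd/clusterserviceversions.operators.coreos.com \
      crd/catalogsources.operators.coreos.com
    kubectl apply -f /etc/kubernetes/addons/olm.yaml
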
	I0816 23:13:57.889662   91500 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/ingress-configmap.yaml -f /etc/kubernetes/addons/ingress-rbac.yaml -f /etc/kubernetes/addons/ingress-dp.yaml: (15.5575091s)
	I0816 23:13:57.889662   91500 addons.go:313] Verifying addon ingress=true in "addons-20210816231050-111344"
	I0816 23:13:57.889662   91500 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (15.0814078s)
	I0816 23:13:57.901234   91500 out.go:177] * Verifying ingress addon...
	I0816 23:13:57.889662   91500 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (15.0681022s)
	I0816 23:13:57.889662   91500 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (14.5795831s)
	I0816 23:13:57.901791   91500 addons.go:313] Verifying addon registry=true in "addons-20210816231050-111344"
	I0816 23:13:57.889662   91500 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (13.9655035s)
	I0816 23:13:57.896694   91500 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (16.3858367s)
	I0816 23:13:57.901791   91500 addons.go:313] Verifying addon metrics-server=true in "addons-20210816231050-111344"
	W0816 23:13:57.901791   91500 addons.go:296] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: unable to recognize "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	I0816 23:13:57.902184   91500 retry.go:31] will retry after 360.127272ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: unable to recognize "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
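
The VolumeSnapshotClass error is the same race in miniature: the snapshot.storage.k8s.io CRDs created by this very batch are not yet established when the class object that instantiates them is submitted, and the 360ms retry scheduled above re-applies the batch.
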
	I0816 23:13:57.904171   91500 out.go:177] * Verifying registry addon...
	I0816 23:13:57.905898   91500 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0816 23:13:57.934279   91500 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0816 23:13:57.944216   91500 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0816 23:13:57.944216   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:13:57.993212   91500 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0816 23:13:57.993316   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
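
From this point the log is dominated by the kapi.go polling loops, each repeating until every pod behind its label selector reports Ready. A hand-run equivalent of, say, the registry wait would be (the timeout here is illustrative):

    kubectl -n kube-system wait --for=condition=Ready pod \
      -l kubernetes.io/minikube-addons=registry --timeout=6m
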
	I0816 23:13:58.174160   91500 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml
	I0816 23:13:58.269283   91500 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0816 23:13:58.496765   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:13:58.521668   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:13:58.621622   91500 pod_ready.go:102] pod "coredns-558bd4d5db-krff4" in "kube-system" namespace has status "Ready":"False"
	I0816 23:13:59.039336   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:13:59.040355   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:13:59.509260   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:13:59.525782   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:00.035831   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:00.092075   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:00.527243   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:00.527584   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:00.787663   91500 pod_ready.go:102] pod "coredns-558bd4d5db-krff4" in "kube-system" namespace has status "Ready":"False"
	I0816 23:14:01.026555   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:01.026555   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:01.502693   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:01.539603   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:02.029260   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:02.030637   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:02.530734   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:02.530996   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:03.028091   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:03.028768   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:03.193840   91500 pod_ready.go:102] pod "coredns-558bd4d5db-krff4" in "kube-system" namespace has status "Ready":"False"
	I0816 23:14:03.516161   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:03.531088   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:04.106801   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:04.116763   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:04.554027   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:04.586936   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:05.034837   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:05.086180   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:05.215762   91500 pod_ready.go:102] pod "coredns-558bd4d5db-krff4" in "kube-system" namespace has status "Ready":"False"
	I0816 23:14:05.520381   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:05.521152   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:05.719404   91500 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (15.4736154s)
	I0816 23:14:05.719404   91500 addons.go:313] Verifying addon csi-hostpath-driver=true in "addons-20210816231050-111344"
	I0816 23:14:05.721996   91500 out.go:177] * Verifying csi-hostpath-driver addon...
	I0816 23:14:05.737193   91500 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0816 23:14:05.819455   91500 kapi.go:86] Found 5 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0816 23:14:05.819455   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:06.015632   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:06.031922   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:06.407743   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:06.540575   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:06.542134   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:06.923649   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:07.036002   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:07.099067   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:07.421682   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:07.517706   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:07.518848   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:07.602970   91500 pod_ready.go:92] pod "coredns-558bd4d5db-krff4" in "kube-system" namespace has status "Ready":"True"
	I0816 23:14:07.603293   91500 pod_ready.go:81] duration metric: took 27.5929893s waiting for pod "coredns-558bd4d5db-krff4" in "kube-system" namespace to be "Ready" ...
	I0816 23:14:07.603875   91500 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-n6gs4" in "kube-system" namespace to be "Ready" ...
	I0816 23:14:07.613463   91500 pod_ready.go:97] error getting pod "coredns-558bd4d5db-n6gs4" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-n6gs4" not found
	I0816 23:14:07.613463   91500 pod_ready.go:81] duration metric: took 9.5873ms waiting for pod "coredns-558bd4d5db-n6gs4" in "kube-system" namespace to be "Ready" ...
	E0816 23:14:07.613463   91500 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-558bd4d5db-n6gs4" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-n6gs4" not found
	I0816 23:14:07.613463   91500 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-20210816231050-111344" in "kube-system" namespace to be "Ready" ...
	I0816 23:14:07.632162   91500 pod_ready.go:92] pod "etcd-addons-20210816231050-111344" in "kube-system" namespace has status "Ready":"True"
	I0816 23:14:07.632382   91500 pod_ready.go:81] duration metric: took 18.9183ms waiting for pod "etcd-addons-20210816231050-111344" in "kube-system" namespace to be "Ready" ...
	I0816 23:14:07.632382   91500 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-20210816231050-111344" in "kube-system" namespace to be "Ready" ...
	I0816 23:14:07.687755   91500 pod_ready.go:92] pod "kube-apiserver-addons-20210816231050-111344" in "kube-system" namespace has status "Ready":"True"
	I0816 23:14:07.687755   91500 pod_ready.go:81] duration metric: took 55.3709ms waiting for pod "kube-apiserver-addons-20210816231050-111344" in "kube-system" namespace to be "Ready" ...
	I0816 23:14:07.687755   91500 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-20210816231050-111344" in "kube-system" namespace to be "Ready" ...
	I0816 23:14:07.720642   91500 pod_ready.go:92] pod "kube-controller-manager-addons-20210816231050-111344" in "kube-system" namespace has status "Ready":"True"
	I0816 23:14:07.720846   91500 pod_ready.go:81] duration metric: took 33.0904ms waiting for pod "kube-controller-manager-addons-20210816231050-111344" in "kube-system" namespace to be "Ready" ...
	I0816 23:14:07.720846   91500 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fmkdj" in "kube-system" namespace to be "Ready" ...
	I0816 23:14:07.791196   91500 pod_ready.go:92] pod "kube-proxy-fmkdj" in "kube-system" namespace has status "Ready":"True"
	I0816 23:14:07.791438   91500 pod_ready.go:81] duration metric: took 70.5885ms waiting for pod "kube-proxy-fmkdj" in "kube-system" namespace to be "Ready" ...
	I0816 23:14:07.791438   91500 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-20210816231050-111344" in "kube-system" namespace to be "Ready" ...
	I0816 23:14:07.848374   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:08.100187   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:08.101882   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:08.224620   91500 pod_ready.go:92] pod "kube-scheduler-addons-20210816231050-111344" in "kube-system" namespace has status "Ready":"True"
	I0816 23:14:08.224620   91500 pod_ready.go:81] duration metric: took 433.166ms waiting for pod "kube-scheduler-addons-20210816231050-111344" in "kube-system" namespace to be "Ready" ...
	I0816 23:14:08.224620   91500 pod_ready.go:38] duration metric: took 28.2818806s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 23:14:08.224620   91500 api_server.go:50] waiting for apiserver process to appear ...
	I0816 23:14:08.235332   91500 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 23:14:08.411591   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:08.520630   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:08.541715   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:08.895874   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:08.998162   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:09.019579   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:09.339479   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:09.538070   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:09.591944   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:09.906274   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:10.005894   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:10.017757   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:10.337118   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:10.505985   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:10.514836   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:10.893198   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:10.994842   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:11.016787   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:11.336844   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:11.498638   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:11.513533   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:11.844837   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:12.030312   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:12.030598   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:12.347905   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:12.504769   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:12.507067   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:12.928855   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:13.012533   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:13.022984   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:13.392797   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:13.512837   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:13.605810   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:13.897501   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:13.996664   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:14.028702   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:14.398848   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:14.505849   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:14.517073   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:14.900546   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:14.996294   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:15.031473   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:15.389685   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:15.520516   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:15.526363   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:15.843902   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:16.009205   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:16.019076   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:16.335164   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:16.493687   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:16.513698   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:16.844274   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:17.010491   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:17.023103   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:17.339112   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:17.501923   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:17.526640   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:17.917300   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:18.010406   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:18.021669   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:18.391766   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:18.501454   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:18.593072   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:18.632952   91500 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: (20.458014s)
	I0816 23:14:18.633201   91500 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (20.3628945s)
	I0816 23:14:18.633201   91500 ssh_runner.go:189] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (10.397241s)
	I0816 23:14:18.633306   91500 api_server.go:70] duration metric: took 40.6363847s to wait for apiserver process to appear ...
	I0816 23:14:18.633306   91500 api_server.go:86] waiting for apiserver healthz status ...
	I0816 23:14:18.633456   91500 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55001/healthz ...
	I0816 23:14:18.707587   91500 api_server.go:265] https://127.0.0.1:55001/healthz returned 200:
	ok
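
The healthz probe can be reproduced with curl against the same forwarded port; -k is needed because the apiserver's serving certificate is not in the host trust store, and default RBAC exposes /healthz anonymously:

    curl -k https://127.0.0.1:55001/healthz
    # -> ok
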
	I0816 23:14:18.717176   91500 api_server.go:139] control plane version: v1.21.3
	I0816 23:14:18.717348   91500 api_server.go:129] duration metric: took 84.0388ms to wait for apiserver health ...
	I0816 23:14:18.717539   91500 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 23:14:18.747141   91500 system_pods.go:59] 18 kube-system pods found
	I0816 23:14:18.747343   91500 system_pods.go:61] "coredns-558bd4d5db-krff4" [77150e1c-a7a9-4bab-94ef-1f73332d758f] Running
	I0816 23:14:18.747343   91500 system_pods.go:61] "csi-hostpath-attacher-0" [af9c279f-f900-486f-9581-895973d7e1a7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0816 23:14:18.747343   91500 system_pods.go:61] "csi-hostpath-provisioner-0" [6e4367ca-4300-4f54-8cc3-4b0401a2ff1b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-provisioner])
	I0816 23:14:18.747545   91500 system_pods.go:61] "csi-hostpath-resizer-0" [7e19a6d0-670d-44f3-a756-62ad0f890763] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0816 23:14:18.747545   91500 system_pods.go:61] "csi-hostpath-snapshotter-0" [3db84625-611d-46cd-95a9-f25d816b105d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-snapshotter])
	I0816 23:14:18.747545   91500 system_pods.go:61] "csi-hostpathplugin-0" [84df90e5-5a9f-4c03-b5dc-2e1243fe10f9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-agent csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-agent csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe])
	I0816 23:14:18.747545   91500 system_pods.go:61] "etcd-addons-20210816231050-111344" [08f103ce-9deb-49ed-8584-1a41e722c716] Running
	I0816 23:14:18.747545   91500 system_pods.go:61] "kube-apiserver-addons-20210816231050-111344" [4ad0734a-a07e-4d6b-9333-2c1cf6c73dfc] Running
	I0816 23:14:18.747545   91500 system_pods.go:61] "kube-controller-manager-addons-20210816231050-111344" [55d67a7d-a5f3-4577-a39f-de16731b4ce6] Running
	I0816 23:14:18.747545   91500 system_pods.go:61] "kube-proxy-fmkdj" [2217ff0c-07b8-43bf-8d52-57f3c0e71976] Running
	I0816 23:14:18.747545   91500 system_pods.go:61] "kube-scheduler-addons-20210816231050-111344" [a57a8310-821f-42bd-a2ea-3476d7be26d1] Running
	I0816 23:14:18.747545   91500 system_pods.go:61] "metrics-server-77c99ccb96-glflj" [a5ac0575-8d04-4cc3-83b9-cc0986a8d8e4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 23:14:18.747545   91500 system_pods.go:61] "registry-cthtr" [a3b6cbf4-099c-41de-9488-1cd1dfae4d47] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0816 23:14:18.747769   91500 system_pods.go:61] "registry-proxy-4tqpl" [9e9df820-eede-4eb7-b43e-f6015c45c25e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0816 23:14:18.747769   91500 system_pods.go:61] "snapshot-controller-989f9ddc8-82xhk" [6c4ad4a8-f6cc-4370-8def-b2292c1a2c0e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0816 23:14:18.747769   91500 system_pods.go:61] "snapshot-controller-989f9ddc8-bg7qb" [3f3a3ee6-1fbb-4e87-9558-0039941db72b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0816 23:14:18.747769   91500 system_pods.go:61] "storage-provisioner" [08660445-63d9-454b-92c2-497c1e301531] Running
	I0816 23:14:18.747769   91500 system_pods.go:61] "tiller-deploy-768d69497-8dxp6" [0e80b784-328e-43b9-a861-307406fea486] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0816 23:14:18.747769   91500 system_pods.go:74] duration metric: took 30.098ms to wait for pod list to return data ...
	I0816 23:14:18.747971   91500 default_sa.go:34] waiting for default service account to be created ...
	I0816 23:14:18.794457   91500 default_sa.go:45] found service account: "default"
	I0816 23:14:18.794565   91500 default_sa.go:55] duration metric: took 46.5924ms for default service account to be created ...
	I0816 23:14:18.794565   91500 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 23:14:18.830336   91500 system_pods.go:86] 18 kube-system pods found
	I0816 23:14:18.830336   91500 system_pods.go:89] "coredns-558bd4d5db-krff4" [77150e1c-a7a9-4bab-94ef-1f73332d758f] Running
	I0816 23:14:18.830494   91500 system_pods.go:89] "csi-hostpath-attacher-0" [af9c279f-f900-486f-9581-895973d7e1a7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0816 23:14:18.830494   91500 system_pods.go:89] "csi-hostpath-provisioner-0" [6e4367ca-4300-4f54-8cc3-4b0401a2ff1b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-provisioner])
	I0816 23:14:18.830494   91500 system_pods.go:89] "csi-hostpath-resizer-0" [7e19a6d0-670d-44f3-a756-62ad0f890763] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0816 23:14:18.830494   91500 system_pods.go:89] "csi-hostpath-snapshotter-0" [3db84625-611d-46cd-95a9-f25d816b105d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-snapshotter])
	I0816 23:14:18.830661   91500 system_pods.go:89] "csi-hostpathplugin-0" [84df90e5-5a9f-4c03-b5dc-2e1243fe10f9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-agent csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-agent csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe])
	I0816 23:14:18.830661   91500 system_pods.go:89] "etcd-addons-20210816231050-111344" [08f103ce-9deb-49ed-8584-1a41e722c716] Running
	I0816 23:14:18.830661   91500 system_pods.go:89] "kube-apiserver-addons-20210816231050-111344" [4ad0734a-a07e-4d6b-9333-2c1cf6c73dfc] Running
	I0816 23:14:18.830732   91500 system_pods.go:89] "kube-controller-manager-addons-20210816231050-111344" [55d67a7d-a5f3-4577-a39f-de16731b4ce6] Running
	I0816 23:14:18.830732   91500 system_pods.go:89] "kube-proxy-fmkdj" [2217ff0c-07b8-43bf-8d52-57f3c0e71976] Running
	I0816 23:14:18.830732   91500 system_pods.go:89] "kube-scheduler-addons-20210816231050-111344" [a57a8310-821f-42bd-a2ea-3476d7be26d1] Running
	I0816 23:14:18.830732   91500 system_pods.go:89] "metrics-server-77c99ccb96-glflj" [a5ac0575-8d04-4cc3-83b9-cc0986a8d8e4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 23:14:18.830849   91500 system_pods.go:89] "registry-cthtr" [a3b6cbf4-099c-41de-9488-1cd1dfae4d47] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0816 23:14:18.830849   91500 system_pods.go:89] "registry-proxy-4tqpl" [9e9df820-eede-4eb7-b43e-f6015c45c25e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0816 23:14:18.830849   91500 system_pods.go:89] "snapshot-controller-989f9ddc8-82xhk" [6c4ad4a8-f6cc-4370-8def-b2292c1a2c0e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0816 23:14:18.830978   91500 system_pods.go:89] "snapshot-controller-989f9ddc8-bg7qb" [3f3a3ee6-1fbb-4e87-9558-0039941db72b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0816 23:14:18.830978   91500 system_pods.go:89] "storage-provisioner" [08660445-63d9-454b-92c2-497c1e301531] Running
	I0816 23:14:18.830978   91500 system_pods.go:89] "tiller-deploy-768d69497-8dxp6" [0e80b784-328e-43b9-a861-307406fea486] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0816 23:14:18.830978   91500 system_pods.go:126] duration metric: took 36.4111ms to wait for k8s-apps to be running ...
	I0816 23:14:18.831117   91500 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 23:14:18.844462   91500 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 23:14:18.853469   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:18.991027   91500 system_svc.go:56] duration metric: took 160.0437ms WaitForService to wait for kubelet.
	I0816 23:14:18.991199   91500 kubeadm.go:547] duration metric: took 40.9942643s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0816 23:14:18.991365   91500 node_conditions.go:102] verifying NodePressure condition ...
	I0816 23:14:19.013481   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:19.023784   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:19.025219   91500 node_conditions.go:122] node storage ephemeral capacity is 65792556Ki
	I0816 23:14:19.025374   91500 node_conditions.go:123] node cpu capacity is 4
	I0816 23:14:19.025493   91500 node_conditions.go:105] duration metric: took 33.9803ms to run NodePressure ...
	I0816 23:14:19.025493   91500 start.go:231] waiting for startup goroutines ...
	I0816 23:14:19.335161   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:19.455554   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:19.507291   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:19.844506   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:19.992558   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:20.011450   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:20.335255   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:20.491055   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:20.507652   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:21.011063   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:21.011771   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:21.016314   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:21.346496   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:21.456809   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:21.507542   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:21.839491   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:21.956582   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:22.007863   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:22.338552   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:22.455996   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:22.515604   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:22.842610   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:22.962030   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:23.010500   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:23.353964   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:23.465420   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:23.513959   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:23.899579   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:23.957721   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:24.026567   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:24.333361   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:24.498964   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:24.516158   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:24.838959   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:24.994541   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:25.013870   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:25.337665   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:25.494389   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:25.526418   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:25.849164   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:25.995786   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:26.007249   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:26.345647   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:26.493642   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:26.514160   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:26.843862   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:26.957694   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:27.012345   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:27.337800   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:27.472477   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:27.508889   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:27.896298   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:27.968811   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:28.020377   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:28.360403   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:28.460652   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:28.521195   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:28.919271   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:28.996886   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:29.014202   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:29.354100   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:29.465568   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:29.516478   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:29.842813   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:29.963792   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:30.019908   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:30.342277   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:30.465467   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:30.511349   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:30.848742   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:30.966063   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:31.143390   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:31.344665   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:31.459752   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:31.509838   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:31.838352   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:31.998343   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:32.017847   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:32.337435   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:32.494670   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:32.516223   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:32.891523   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:33.004031   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:33.029987   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:33.339737   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:33.500611   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:33.515245   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:33.835369   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:33.995078   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:34.009950   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:34.335075   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:34.500229   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:34.516440   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:34.841982   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:34.994217   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:35.015829   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:35.347387   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:35.500334   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:35.516320   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:35.842825   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:35.998497   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:36.015332   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:36.340452   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:36.495247   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:36.517485   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:36.835626   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:36.994973   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:37.019044   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:37.336113   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:37.497166   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:37.514876   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:37.844493   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:38.001101   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:38.029929   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:38.336923   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:38.498587   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:38.509903   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:38.836033   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:38.999445   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:39.005265   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:39.342400   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:39.493621   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:39.508820   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:39.836337   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:39.992611   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:40.014406   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:40.337939   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:40.458649   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:40.507657   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:40.840970   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:40.993620   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:41.009168   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:41.338308   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:41.495982   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:41.510675   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:41.833643   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:41.993081   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:42.009639   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:42.335154   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:42.491211   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:42.509323   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:42.842006   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:42.999891   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:43.017614   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:43.341975   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:43.491721   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:43.513393   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:43.848528   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:43.956007   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:44.005083   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:44.335413   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:44.459396   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:44.511534   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:44.840688   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:44.957680   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:45.009350   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:45.339868   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:45.465490   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:45.518106   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:45.842382   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:45.956780   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:46.007226   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:46.336431   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:46.458338   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:46.508557   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 23:14:46.865834   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:46.962764   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:47.009300   91500 kapi.go:108] duration metric: took 49.0731565s to wait for kubernetes.io/minikube-addons=registry ...
	I0816 23:14:47.360060   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:47.467628   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:47.843026   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:47.959327   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:48.335570   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:48.492887   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:48.838011   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:48.956071   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:49.338209   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:49.505396   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:49.835690   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:49.961783   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:50.340026   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:50.456124   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:50.843251   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:50.993223   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:51.432859   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:51.462976   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:51.840190   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:51.962500   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:52.343789   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:52.459795   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:52.840983   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:52.959158   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:53.340641   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:53.493241   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:53.848782   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:53.957514   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:54.339614   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:54.458545   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:54.836822   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:54.993871   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:55.345904   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:55.506603   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:55.853104   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:56.002889   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:56.347680   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:56.464270   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:56.840588   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:56.955150   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:57.338722   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:57.457140   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:57.852936   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:57.971115   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:58.361084   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:58.467564   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:58.846763   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:58.965225   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:59.338959   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:59.466197   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:14:59.847836   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:14:59.962076   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:00.342344   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:00.459365   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:00.845036   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:00.969995   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:01.339179   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:01.634442   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:01.837167   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:01.969486   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:02.350467   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:02.458511   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:02.840836   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:02.993714   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:03.337640   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:03.495110   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:03.849094   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:03.957440   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:04.335017   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:04.494182   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:04.837928   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:04.995136   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:05.338088   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:05.492510   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:05.846099   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:05.991968   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:06.347807   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:06.496429   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:06.839373   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:07.000080   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:07.336795   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:07.489748   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:07.841141   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:08.001839   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:08.341094   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:08.500155   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:08.837191   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:09.145553   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:09.336118   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:09.456960   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:09.836618   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:09.960363   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:10.342364   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:10.461218   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:10.842989   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:10.959557   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:11.336894   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:11.505945   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:11.841111   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:11.995363   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:12.342760   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:12.494532   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:12.915409   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:13.018330   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:13.390820   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:13.512197   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:13.837259   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:14.011451   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:14.344253   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:14.497173   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:14.835596   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:15.009197   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:15.343676   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:15.496115   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:15.844725   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:16.001788   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:16.343416   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:16.501567   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:16.898071   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:17.000535   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:17.339055   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:17.495975   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:17.839774   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:18.002292   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:18.335324   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:18.495066   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:18.838643   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:18.990627   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:19.337106   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:19.460817   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:19.839948   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:19.999648   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:20.337693   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:20.500682   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:20.906378   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:21.036080   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:21.338666   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:21.525604   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:21.848493   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:22.004551   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:22.339165   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:22.609143   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:23.002479   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:23.004376   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:23.350667   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:23.502157   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:23.844076   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:23.994964   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:24.336298   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:24.506148   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:24.844792   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:24.999808   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:25.339104   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:25.494858   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:25.843186   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:25.998439   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:26.347469   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:26.496401   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:26.838326   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:26.998061   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:27.339142   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:27.494979   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:27.838919   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:27.996756   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:28.411734   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:28.500651   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:28.840374   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:28.996513   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:29.335065   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:29.458649   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:29.846552   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:29.961564   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:30.358209   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:30.466080   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:30.831388   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:30.994899   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:31.338692   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:31.493750   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:31.837179   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:32.019493   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:32.340187   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:32.495895   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:32.847813   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:33.012155   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:33.345904   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:33.461382   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:33.838795   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:33.996676   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:34.340160   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:34.498546   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:34.887263   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:35.027690   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:35.351949   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:35.469913   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:35.853874   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:35.964114   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:36.351024   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:36.495391   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:36.842421   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:37.000190   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:37.336811   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:37.493559   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:37.842372   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:37.995220   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:38.337928   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:38.499442   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:38.839636   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:38.958193   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:39.338981   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:39.459352   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:39.835547   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:39.964343   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:40.341239   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:40.474059   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:40.857778   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:41.015320   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:41.340771   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:41.504047   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:41.838073   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:42.000495   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:42.336390   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:42.495001   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:42.843852   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:42.996098   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:43.343228   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:43.496194   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:43.836732   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:43.995755   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:44.343137   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:44.465020   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:44.835275   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:44.958008   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:45.338702   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:45.465014   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:45.858634   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:45.959335   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:46.349838   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:46.494344   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:46.849521   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:46.998292   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:47.334809   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:47.492050   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:47.837883   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:47.996766   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:48.338536   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:48.495991   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:48.840017   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:48.995113   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:49.340478   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:49.459581   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:49.846500   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:49.960166   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:50.341713   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:50.458052   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:50.848425   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:50.961478   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:51.341973   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:51.469375   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:51.855193   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:51.972169   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:52.345085   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:52.493304   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:52.905696   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:52.966688   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:53.342990   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:53.466602   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:53.846022   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:54.003292   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:54.446941   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:54.528368   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:54.867997   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:54.972084   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:55.348759   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:55.463744   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:55.848387   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:55.994851   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:56.354169   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:56.475637   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:56.844015   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:56.987945   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:57.348465   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:57.497232   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:57.844998   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:57.998082   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:58.340693   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:58.495674   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:58.846061   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:58.997880   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:59.338904   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:59.498514   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:15:59.848576   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:15:59.996500   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:16:00.338913   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:00.516819   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:16:00.851525   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:00.996444   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:16:01.339826   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:01.497627   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:16:01.843658   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:01.996152   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:16:02.435320   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:02.496423   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:16:02.839518   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:02.999490   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:16:03.340925   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:03.497780   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:16:03.845605   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:03.996684   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:16:04.339278   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:04.496708   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:16:04.839825   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:04.997009   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:16:05.344192   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:05.500266   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:16:05.838792   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:05.998152   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:16:06.339452   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:06.497697   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:16:06.927164   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:07.002454   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:16:07.343293   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:07.499643   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:16:07.842510   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:08.004225   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:16:08.345288   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:08.496565   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:16:08.850536   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:09.002928   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:16:09.338868   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:09.505344   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:16:09.851379   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:09.960897   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:16:10.344023   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:10.496621   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:16:10.850597   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:10.998494   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:16:11.336333   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:11.459746   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:16:11.852392   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:11.994832   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:16:12.337902   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:12.537240   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:16:12.842507   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:12.996760   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:16:13.396419   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:13.499752   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:16:13.915107   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:13.997884   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:16:14.341655   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:14.505589   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:16:14.843577   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:15.007497   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:16:15.348777   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:15.501753   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:16:15.843816   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:15.998384   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:16:16.351829   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:16.496572   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:16:16.842079   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:16.999606   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:16:17.344427   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:17.497734   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:16:17.850699   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:18.000133   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:16:18.344457   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:18.496334   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:16:18.847124   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:18.998147   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:16:19.342559   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:19.497706   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:16:19.905108   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:19.960232   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:16:20.488350   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:16:20.492098   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:20.850918   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:20.959760   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:16:21.347331   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:21.501489   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:16:21.888914   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:22.013163   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:16:22.423675   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:22.500589   91500 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 23:16:22.939136   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:23.007520   91500 kapi.go:108] duration metric: took 2m25.0961087s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0816 23:16:23.414683   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:23.872301   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:24.413116   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:24.862942   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:25.401643   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:25.840063   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:26.342134   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:26.901315   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:27.342155   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:27.925071   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:28.348581   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:28.849761   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:29.339222   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:29.906053   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:30.409396   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:30.846800   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:31.345890   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:31.840556   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:32.341402   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:33.002541   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:33.343084   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:33.847260   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:34.342879   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:35.004428   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:35.344434   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:35.847733   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:36.347169   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:36.848500   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:37.346975   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:37.840385   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:38.349792   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:38.845521   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:39.341504   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:39.940794   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:40.340938   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:40.848048   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:41.347974   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:41.862239   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:42.343473   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:42.844792   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:43.345971   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:43.850112   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:44.344086   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:44.844117   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:45.338962   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:45.845969   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:46.348279   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:46.847504   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:47.340243   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:47.841321   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:48.344133   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:48.844452   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:49.345898   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:49.842433   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:50.340615   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:50.846814   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:51.340580   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:51.841192   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:52.436263   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:52.858789   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:53.357346   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:53.849218   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:54.347099   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:54.914284   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:55.350213   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:55.845697   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:56.342395   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:56.844583   91500 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 23:16:57.348658   91500 kapi.go:108] duration metric: took 2m51.6049431s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0816 23:16:57.352413   91500 out.go:177] * Enabled addons: storage-provisioner, helm-tiller, metrics-server, default-storageclass, olm, volumesnapshots, registry, ingress, csi-hostpath-driver
	I0816 23:16:57.352594   91500 addons.go:344] enableAddons completed in 3m19.3486335s
	I0816 23:16:58.037386   91500 start.go:462] kubectl: 1.20.0, cluster: 1.21.3 (minor skew: 1)
	I0816 23:16:58.040390   91500 out.go:177] * Done! kubectl is now configured to use "addons-20210816231050-111344" cluster and "default" namespace by default
	
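(Editor's note: the long run of kapi.go:96 lines above is produced by a label-selector polling loop that re-lists matching pods until they leave Pending, then reports the total wait as a duration metric. The following is a minimal client-go sketch of that pattern only; it is not minikube's actual kapi.go, the name waitForPods is hypothetical, and the ~500ms sleep is inferred from the timestamp cadence in the log.)

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPods polls pods matching selector in ns until all are Running,
// logging one "waiting for pod" line per check, like the kapi.go:96 output above.
func waitForPods(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	start := time.Now()
	for time.Since(start) < timeout {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			allRunning := true
			for _, p := range pods.Items {
				if p.Status.Phase != "Running" {
					allRunning = false
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
				}
			}
			if allRunning {
				// Corresponds to the "duration metric: took ..." lines in the log.
				fmt.Printf("took %s to wait for %s\n", time.Since(start), selector)
				return nil
			}
		}
		// Roughly matches the ~500ms interval between log timestamps (assumption).
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", selector)
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	if err := waitForPods(cs, "kube-system",
		"kubernetes.io/minikube-addons=csi-hostpath-driver", 5*time.Minute); err != nil {
		fmt.Println(err)
	}
}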
	* 
	* ==> Docker <==
	* -- Logs begin at Mon 2021-08-16 23:12:01 UTC, end at Mon 2021-08-16 23:17:31 UTC. --
	Aug 16 23:14:27 addons-20210816231050-111344 dockerd[772]: time="2021-08-16T23:14:27.308415000Z" level=warning msg="reference for unknown type: " digest="sha256:1040f25a5273de0d72c54865a8efd47e3292de9fb8e5353e3fa76736b854f2da" remote="gcr.io/google_containers/kube-registry-proxy@sha256:1040f25a5273de0d72c54865a8efd47e3292de9fb8e5353e3fa76736b854f2da"
	Aug 16 23:14:44 addons-20210816231050-111344 dockerd[772]: time="2021-08-16T23:14:44.838704000Z" level=warning msg="reference for unknown type: " digest="sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" remote="docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7"
	Aug 16 23:14:50 addons-20210816231050-111344 dockerd[772]: time="2021-08-16T23:14:50.366764800Z" level=warning msg="reference for unknown type: " digest="sha256:6003775d503546087266eda39418d221f9afb5ccfe35f637c32a1161619a3f9c" remote="gcr.io/kubernetes-helm/tiller@sha256:6003775d503546087266eda39418d221f9afb5ccfe35f637c32a1161619a3f9c"
	Aug 16 23:14:50 addons-20210816231050-111344 dockerd[772]: time="2021-08-16T23:14:50.612047500Z" level=info msg="ignoring event" container=d2d8505121c63de405ae2f1463ca5acc77986e4e30e058352cbaee0417a77c34 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 16 23:14:50 addons-20210816231050-111344 dockerd[772]: time="2021-08-16T23:14:50.906530400Z" level=info msg="ignoring event" container=2d1261517a915d7f0f928c76b9eb563ab1dd540288aaeae603bd283aa35902bc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 16 23:14:51 addons-20210816231050-111344 dockerd[772]: time="2021-08-16T23:14:51.298717800Z" level=info msg="ignoring event" container=a00b94e28bc776d180cc77ded6963b89096b5479b059695c95b947e3ce50158d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 16 23:14:51 addons-20210816231050-111344 dockerd[772]: time="2021-08-16T23:14:51.403308200Z" level=info msg="ignoring event" container=ae7e106500e9085ef2f26b1e6a4b33e4e408901ba10ffbbaadbba0739a5d40ad module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 16 23:14:56 addons-20210816231050-111344 dockerd[772]: time="2021-08-16T23:14:56.689831700Z" level=warning msg="reference for unknown type: " digest="sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607" remote="quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607"
	Aug 16 23:15:08 addons-20210816231050-111344 dockerd[772]: time="2021-08-16T23:15:08.921266800Z" level=warning msg="reference for unknown type: " digest="sha256:00fcc441ea9f72899c25eed61d602272a2a58c5f0014332bdcb5ac24acef08e4" remote="k8s.gcr.io/sig-storage/snapshot-controller@sha256:00fcc441ea9f72899c25eed61d602272a2a58c5f0014332bdcb5ac24acef08e4"
	Aug 16 23:15:20 addons-20210816231050-111344 dockerd[772]: time="2021-08-16T23:15:20.132073300Z" level=warning msg="reference for unknown type: " digest="sha256:c20d4a4772599e68944452edfcecc944a1df28c19e94b942d526ca25a522ea02" remote="k8s.gcr.io/sig-storage/csi-external-health-monitor-agent@sha256:c20d4a4772599e68944452edfcecc944a1df28c19e94b942d526ca25a522ea02"
	Aug 16 23:15:28 addons-20210816231050-111344 dockerd[772]: time="2021-08-16T23:15:28.564890900Z" level=warning msg="reference for unknown type: " digest="sha256:50c3cfd458fc8e0bf3c8c521eac39172009382fc66dc5044a330d137c6ed0b09" remote="k8s.gcr.io/sig-storage/csi-attacher@sha256:50c3cfd458fc8e0bf3c8c521eac39172009382fc66dc5044a330d137c6ed0b09"
	Aug 16 23:15:33 addons-20210816231050-111344 dockerd[772]: time="2021-08-16T23:15:33.688351900Z" level=warning msg="reference for unknown type: " digest="sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2" remote="k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2"
	Aug 16 23:15:38 addons-20210816231050-111344 dockerd[772]: time="2021-08-16T23:15:38.852030400Z" level=warning msg="reference for unknown type: " digest="sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a" remote="k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a"
	Aug 16 23:15:44 addons-20210816231050-111344 dockerd[772]: time="2021-08-16T23:15:44.288629300Z" level=warning msg="reference for unknown type: " digest="sha256:51f2dfde5bccac7854b3704689506aeecfb793328427b91115ba253a93e60782" remote="k8s.gcr.io/sig-storage/csi-snapshotter@sha256:51f2dfde5bccac7854b3704689506aeecfb793328427b91115ba253a93e60782"
	Aug 16 23:15:49 addons-20210816231050-111344 dockerd[772]: time="2021-08-16T23:15:49.592744000Z" level=warning msg="reference for unknown type: " digest="sha256:3dd0fac48073beaca2d67a78c746c7593f9c575168a17139a9955a82c63c4b9a" remote="k8s.gcr.io/ingress-nginx/controller@sha256:3dd0fac48073beaca2d67a78c746c7593f9c575168a17139a9955a82c63c4b9a"
	Aug 16 23:16:22 addons-20210816231050-111344 dockerd[772]: time="2021-08-16T23:16:22.101196300Z" level=warning msg="reference for unknown type: " digest="sha256:cc7b3fdaa1ccdea5866fcd171669dc0ed88d3477779d8ed32e3712c827e38cc0" remote="quay.io/operator-framework/upstream-community-operators@sha256:cc7b3fdaa1ccdea5866fcd171669dc0ed88d3477779d8ed32e3712c827e38cc0"
	Aug 16 23:16:23 addons-20210816231050-111344 dockerd[772]: time="2021-08-16T23:16:23.032413800Z" level=warning msg="Error persisting manifest" digest="sha256:cc7b3fdaa1ccdea5866fcd171669dc0ed88d3477779d8ed32e3712c827e38cc0" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:5aaf812b69ea33e8900a49335843a6689937e8354b0e1157dec5174f7d1c5374, expected sha256:cc7b3fdaa1ccdea5866fcd171669dc0ed88d3477779d8ed32e3712c827e38cc0: failed precondition" remote="quay.io/operator-framework/upstream-community-operators@sha256:cc7b3fdaa1ccdea5866fcd171669dc0ed88d3477779d8ed32e3712c827e38cc0"
	Aug 16 23:16:39 addons-20210816231050-111344 dockerd[772]: time="2021-08-16T23:16:39.607949700Z" level=warning msg="reference for unknown type: " digest="sha256:14988b598a180cc0282f3f4bc982371baf9a9c9b80878fb385f8ae8bd04ecf16" remote="k8s.gcr.io/sig-storage/csi-external-health-monitor-controller@sha256:14988b598a180cc0282f3f4bc982371baf9a9c9b80878fb385f8ae8bd04ecf16"
	Aug 16 23:16:44 addons-20210816231050-111344 dockerd[772]: time="2021-08-16T23:16:44.752397800Z" level=warning msg="reference for unknown type: " digest="sha256:e07f914c32f0505e4c470a62a40ee43f84cbf8dc46ff861f31b14457ccbad108" remote="k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:e07f914c32f0505e4c470a62a40ee43f84cbf8dc46ff861f31b14457ccbad108"
	Aug 16 23:16:47 addons-20210816231050-111344 dockerd[772]: time="2021-08-16T23:16:47.279583000Z" level=warning msg="reference for unknown type: " digest="sha256:b526bd29630261eceecf2d38c84d4f340a424d57e1e2661111e2649a4663b659" remote="k8s.gcr.io/sig-storage/hostpathplugin@sha256:b526bd29630261eceecf2d38c84d4f340a424d57e1e2661111e2649a4663b659"
	Aug 16 23:16:50 addons-20210816231050-111344 dockerd[772]: time="2021-08-16T23:16:50.976653600Z" level=warning msg="reference for unknown type: " digest="sha256:48da0e4ed7238ad461ea05f68c25921783c37b315f21a5c5a2780157a6460994" remote="k8s.gcr.io/sig-storage/livenessprobe@sha256:48da0e4ed7238ad461ea05f68c25921783c37b315f21a5c5a2780157a6460994"
	Aug 16 23:17:10 addons-20210816231050-111344 dockerd[772]: time="2021-08-16T23:17:10.214231700Z" level=info msg="ignoring event" container=281770e64cd409f3d0135ffdb66d959fc5d880338d7e71ae00b671f5f0621c92 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 16 23:17:10 addons-20210816231050-111344 dockerd[772]: time="2021-08-16T23:17:10.918298100Z" level=info msg="ignoring event" container=1baa871b551340209ac423e7d9299bc4abbd34b662a221ff2f0ba64425c0c821 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 16 23:17:23 addons-20210816231050-111344 dockerd[772]: time="2021-08-16T23:17:23.398675700Z" level=info msg="ignoring event" container=cc3b85f1fc2ff88d41bcb597a6aeb0546addde759acbe3a13821f005cb2c5453 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 16 23:17:25 addons-20210816231050-111344 dockerd[772]: time="2021-08-16T23:17:25.396542800Z" level=info msg="ignoring event" container=d70fd470d478e5e375e0251447ffeb89318eb26fea9cb7dd7c59554788c7abde module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                                   CREATED              STATE               NAME                                     ATTEMPT             POD ID
	cb05deebc11f0       busybox@sha256:bda689514be526d9557ad442312e5d541757c453c50b8cf2ae68597c291385a1                                                         24 seconds ago       Running             busybox                                  0                   0cde25f21156d
	c232014f5dafe       k8s.gcr.io/sig-storage/livenessprobe@sha256:48da0e4ed7238ad461ea05f68c25921783c37b315f21a5c5a2780157a6460994                            38 seconds ago       Running             liveness-probe                           0                   b241d23ac64cf
	f656924568e12       k8s.gcr.io/sig-storage/hostpathplugin@sha256:b526bd29630261eceecf2d38c84d4f340a424d57e1e2661111e2649a4663b659                           42 seconds ago       Running             hostpath                                 0                   b241d23ac64cf
	7340b6e33e395       k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:e07f914c32f0505e4c470a62a40ee43f84cbf8dc46ff861f31b14457ccbad108                46 seconds ago       Running             node-driver-registrar                    0                   b241d23ac64cf
	a84d6fc4d461e       k8s.gcr.io/sig-storage/csi-external-health-monitor-controller@sha256:14988b598a180cc0282f3f4bc982371baf9a9c9b80878fb385f8ae8bd04ecf16   48 seconds ago       Running             csi-external-health-monitor-controller   0                   b241d23ac64cf
	84df63b92413e       quay.io/operator-framework/upstream-community-operators@sha256:cc7b3fdaa1ccdea5866fcd171669dc0ed88d3477779d8ed32e3712c827e38cc0         53 seconds ago       Running             registry-server                          0                   adb9549cc4b36
	9df9b5abaf25d       quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607                                  About a minute ago   Running             packageserver                            0                   751a4fcb6d563
	62c3aeb341a2f       quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607                                  About a minute ago   Running             packageserver                            0                   dc73c38a0e65e
	d000e706b4d7a       k8s.gcr.io/ingress-nginx/controller@sha256:3dd0fac48073beaca2d67a78c746c7593f9c575168a17139a9955a82c63c4b9a                             About a minute ago   Running             controller                               0                   c79161139c48d
	9d51c9694b804       k8s.gcr.io/sig-storage/csi-snapshotter@sha256:51f2dfde5bccac7854b3704689506aeecfb793328427b91115ba253a93e60782                          About a minute ago   Running             csi-snapshotter                          0                   7d7e736a94e73
	71375cf35d9b1       k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a                              About a minute ago   Running             csi-resizer                              0                   574422bde6342
	9e4bbcd69ba1a       k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2                          About a minute ago   Running             csi-provisioner                          0                   c5f1a0de0154f
	e5b40e6f4b0e3       k8s.gcr.io/sig-storage/snapshot-controller@sha256:00fcc441ea9f72899c25eed61d602272a2a58c5f0014332bdcb5ac24acef08e4                      About a minute ago   Running             volume-snapshot-controller               0                   a9038f33cd0e2
	ceefb04f876db       k8s.gcr.io/sig-storage/csi-attacher@sha256:50c3cfd458fc8e0bf3c8c521eac39172009382fc66dc5044a330d137c6ed0b09                             About a minute ago   Running             csi-attacher                             0                   ecb951c2547b2
	cbb1b09d3f676       k8s.gcr.io/sig-storage/csi-external-health-monitor-agent@sha256:c20d4a4772599e68944452edfcecc944a1df28c19e94b942d526ca25a522ea02        2 minutes ago        Running             csi-external-health-monitor-agent        0                   b241d23ac64cf
	c44b6ff351375       quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607                                  2 minutes ago        Running             catalog-operator                         0                   a13ed3f63eef1
	0db27205ec00f       k8s.gcr.io/sig-storage/snapshot-controller@sha256:00fcc441ea9f72899c25eed61d602272a2a58c5f0014332bdcb5ac24acef08e4                      2 minutes ago        Running             volume-snapshot-controller               0                   9da9a37ca861d
	e345fd3514529       quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607                                  2 minutes ago        Running             olm-operator                             0                   1ccaa41439dd1
	104924ec9b9c9       gcr.io/kubernetes-helm/tiller@sha256:6003775d503546087266eda39418d221f9afb5ccfe35f637c32a1161619a3f9c                                   2 minutes ago        Running             tiller                                   0                   400d0b6db202f
	2d1261517a915       jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7                                    2 minutes ago        Exited              patch                                    0                   ae7e106500e90
	d2d8505121c63       jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7                                    2 minutes ago        Exited              create                                   0                   a00b94e28bc77
	f439591f2b684       gcr.io/google_containers/kube-registry-proxy@sha256:1040f25a5273de0d72c54865a8efd47e3292de9fb8e5353e3fa76736b854f2da                    2 minutes ago        Running             registry-proxy                           0                   2c4158105df96
	76470aaf4e30a       registry@sha256:d5459fcb27aecc752520df4b492b08358a1912fcdfa454f7d2101d4b09991daa                                                        3 minutes ago        Running             registry                                 0                   e5638eede2d14
	df584c58c6a36       6e38f40d628db                                                                                                                           3 minutes ago        Running             storage-provisioner                      0                   cd288fe9fd13a
	4147d62aea690       296a6d5035e2d                                                                                                                           3 minutes ago        Running             coredns                                  0                   dc5062a06a940
	a4570d039fc78       adb2816ea823a                                                                                                                           3 minutes ago        Running             kube-proxy                               0                   a7675aebe706e
	3ad8110d65d70       bc2bb319a7038                                                                                                                           4 minutes ago        Running             kube-controller-manager                  0                   b8ef3b68aa109
	f75ad2afb10b0       3d174f00aa39e                                                                                                                           4 minutes ago        Running             kube-apiserver                           0                   dc05122cd3bdb
	13bb5f090fda1       0369cf4303ffd                                                                                                                           4 minutes ago        Running             etcd                                     0                   68fa253390883
	85b968d286a4e       6be0dc1302e30                                                                                                                           4 minutes ago        Running             kube-scheduler                           0                   47886a01abe90
	
	* 
	* ==> coredns [4147d62aea69] <==
	* I0816 23:14:04.295107       1 trace.go:205] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156 (16-Aug-2021 23:13:43.288) (total time: 21006ms):
	Trace[2019727887]: [21.0065638s] [21.0065638s] END
	E0816 23:14:04.295167       1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	I0816 23:14:04.323183       1 trace.go:205] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156 (16-Aug-2021 23:13:43.287) (total time: 21035ms):
	Trace[939984059]: [21.0352699s] [21.0352699s] END
	E0816 23:14:04.323231       1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	I0816 23:14:04.325219       1 trace.go:205] Trace[1474941318]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156 (16-Aug-2021 23:13:43.288) (total time: 21036ms):
	Trace[1474941318]: [21.036677s] [21.036677s] END
	E0816 23:14:04.325241       1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               addons-20210816231050-111344
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-20210816231050-111344
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48
	                    minikube.k8s.io/name=addons-20210816231050-111344
	                    minikube.k8s.io/updated_at=2021_08_16T23_13_24_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-20210816231050-111344
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-20210816231050-111344"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Aug 2021 23:13:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-20210816231050-111344
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Aug 2021 23:17:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Aug 2021 23:17:31 +0000   Mon, 16 Aug 2021 23:13:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Aug 2021 23:17:31 +0000   Mon, 16 Aug 2021 23:13:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Aug 2021 23:17:31 +0000   Mon, 16 Aug 2021 23:13:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Aug 2021 23:17:31 +0000   Mon, 16 Aug 2021 23:13:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-20210816231050-111344
	Capacity:
	  cpu:                4
	  ephemeral-storage:  65792556Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             20481980Ki
	  pods:               110
	Allocatable:
	  cpu:                4
	  ephemeral-storage:  65792556Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             20481980Ki
	  pods:               110
	System Info:
	  Machine ID:                 dfc5def84a78402c9caa00a7cad25a86
	  System UUID:                d471d656-8f5e-4011-b70a-b118b21096d9
	  Boot ID:                    59d49a8b-044c-440e-a1d3-94e728b56235
	  Kernel Version:             4.19.121-linuxkit
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.8
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	  default                     nginx                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  ingress-nginx               ingress-nginx-controller-59b45fb494-hls5q               100m (2%)     0 (0%)      90Mi (0%)        0 (0%)         3m46s
	  kube-system                 coredns-558bd4d5db-krff4                                100m (2%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m58s
	  kube-system                 csi-hostpath-attacher-0                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 csi-hostpath-provisioner-0                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m31s
	  kube-system                 csi-hostpath-resizer-0                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m31s
	  kube-system                 csi-hostpath-snapshotter-0                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m30s
	  kube-system                 csi-hostpathplugin-0                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 etcd-addons-20210816231050-111344                       100m (2%)     0 (0%)      100Mi (0%)       0 (0%)         4m9s
	  kube-system                 helm-test                                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	  kube-system                 kube-apiserver-addons-20210816231050-111344             250m (6%)     0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-controller-manager-addons-20210816231050-111344    200m (5%)     0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 kube-proxy-fmkdj                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 kube-scheduler-addons-20210816231050-111344             100m (2%)     0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 registry-cthtr                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m46s
	  kube-system                 registry-proxy-4tqpl                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m46s
	  kube-system                 snapshot-controller-989f9ddc8-82xhk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 snapshot-controller-989f9ddc8-bg7qb                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m48s
	  kube-system                 tiller-deploy-768d69497-8dxp6                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
	  olm                         catalog-operator-75d496484d-tm95b                       10m (0%)      0 (0%)      80Mi (0%)        0 (0%)         3m38s
	  olm                         olm-operator-859c88c96-s6g8z                            10m (0%)      0 (0%)      160Mi (0%)       0 (0%)         3m38s
	  olm                         operatorhubio-catalog-ql8cg                             10m (0%)      0 (0%)      50Mi (0%)        0 (0%)         2m13s
	  olm                         packageserver-6db76c9f-6lv6w                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  olm                         packageserver-6db76c9f-x47pk                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                880m (22%)  0 (0%)
	  memory             550Mi (2%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  4m27s (x6 over 4m28s)  kubelet     Node addons-20210816231050-111344 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m27s (x6 over 4m28s)  kubelet     Node addons-20210816231050-111344 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m27s (x5 over 4m28s)  kubelet     Node addons-20210816231050-111344 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m10s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m10s                  kubelet     Node addons-20210816231050-111344 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m10s                  kubelet     Node addons-20210816231050-111344 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m10s                  kubelet     Node addons-20210816231050-111344 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             4m10s                  kubelet     Node addons-20210816231050-111344 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  4m10s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m                     kubelet     Node addons-20210816231050-111344 status is now: NodeReady
	  Normal  Starting                 3m53s                  kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.000032]  ? hrtimer_init+0xde/0xde
	[  +0.000002]  hrtimer_wakeup+0x1e/0x21
	[  +0.000022]  __hrtimer_run_queues+0x117/0x1c4
	[  +0.000010]  ? ktime_get_update_offsets_now+0x36/0x95
	[  +0.000003]  hrtimer_interrupt+0x92/0x165
	[  +0.000044]  hv_stimer0_isr+0x20/0x2d
	[  +0.000053]  hv_stimer0_vector_handler+0x3b/0x57
	[  +0.000021]  hv_stimer0_callback_vector+0xf/0x20
	[  +0.000002]  </IRQ>
	[  +0.000002] RIP: 0010:native_safe_halt+0x7/0x8
	[  +0.000002] Code: 60 02 df f0 83 44 24 fc 00 48 8b 00 a8 08 74 0b 65 81 25 dd ce 6f 6e ff ff ff 7f c3 e8 ce e6 72 ff f4 c3 e8 c7 e6 72 ff fb f4 <c3> 0f 1f 44 00 00 53 e8 69 0e 82 ff 65 8b 35 83 64 6f 6e 31 ff e8
	[  +0.000001] RSP: 0018:ffffb51d800a3ec8 EFLAGS: 00000246 ORIG_RAX: ffffffffffffff12
	[  +0.000002] RAX: ffffffff91918b30 RBX: 0000000000000001 RCX: ffffffff92253150
	[  +0.000001] RDX: 0000000000171622 RSI: 0000000000000001 RDI: 0000000000000001
	[  +0.000001] RBP: 0000000000000000 R08: 0000007cfc1104b2 R09: 0000000000000002
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: ffff8d162e19ef80 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000002]  ? __sched_text_end+0x1/0x1
	[  +0.000021]  ? native_safe_halt+0x5/0x8
	[  +0.000002]  default_idle+0x1b/0x2c
	[  +0.000003]  do_idle+0xe5/0x216
	[  +0.000003]  cpu_startup_entry+0x6f/0x71
	[  +0.000019]  start_secondary+0x18e/0x1a9
	[  +0.000032]  secondary_startup_64+0xa4/0xb0
	[  +0.000020] ---[ end trace b7d34331c4afdfb9 ]---
	
	* 
	* ==> etcd [13bb5f090fda] <==
	* 2021-08-16 23:14:41.485928 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 23:14:51.488053 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 23:15:01.473156 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 23:15:09.137298 W | etcdserver: read-only range request "key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" " with result "range_response_count:3 size:13552" took too long (185.9227ms) to execute
	2021-08-16 23:15:11.490702 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 23:15:21.487762 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 23:15:22.936408 W | etcdserver: read-only range request "key:\"/registry/validatingwebhookconfigurations/\" range_end:\"/registry/validatingwebhookconfigurations0\" " with result "range_response_count:1 size:3017" took too long (107.9788ms) to execute
	2021-08-16 23:15:31.487656 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 23:15:41.489571 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 23:15:51.489308 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 23:16:01.490429 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 23:16:11.489952 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 23:16:20.470963 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (219.9683ms) to execute
	2021-08-16 23:16:20.471157 W | etcdserver: read-only range request "key:\"/registry/flowschemas/exempt\" " with result "range_response_count:1 size:879" took too long (206.6613ms) to execute
	2021-08-16 23:16:20.471574 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:18 size:83586" took too long (139.6076ms) to execute
	2021-08-16 23:16:20.472505 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/clusterserviceversions/olm/packageserver\" " with result "range_response_count:1 size:10084" took too long (253.0287ms) to execute
	2021-08-16 23:16:21.505992 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 23:16:31.494572 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 23:16:39.920579 W | etcdserver: read-only range request "key:\"/registry/pods/ingress-nginx/ingress-nginx-controller-59b45fb494-hls5q\" " with result "range_response_count:1 size:6009" took too long (283.878ms) to execute
	2021-08-16 23:16:41.508324 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 23:16:51.468811 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 23:17:01.473832 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 23:17:11.515622 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 23:17:21.493295 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 23:17:31.496512 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  23:17:35 up 13 min,  0 users,  load average: 2.24, 4.06, 2.05
	Linux addons-20210816231050-111344 4.19.121-linuxkit #1 SMP Tue Dec 1 17:50:32 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [f75ad2afb10b] <==
	* I0816 23:15:17.309811       1 client.go:360] parsed scheme: "passthrough"
	I0816 23:15:17.310105       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0816 23:15:17.310121       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0816 23:15:19.414677       1 handler_proxy.go:102] no RequestInfo found in the context
	E0816 23:15:19.415013       1 controller.go:116] loading OpenAPI spec for "v1.packages.operators.coreos.com" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0816 23:15:19.415035       1 controller.go:129] OpenAPI AggregationController: action for item v1.packages.operators.coreos.com: Rate Limited Requeue.
	I0816 23:16:00.421460       1 client.go:360] parsed scheme: "passthrough"
	I0816 23:16:00.421569       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0816 23:16:00.421806       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0816 23:16:19.420168       1 handler_proxy.go:102] no RequestInfo found in the context
	E0816 23:16:19.420451       1 controller.go:116] loading OpenAPI spec for "v1.packages.operators.coreos.com" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0816 23:16:19.420465       1 controller.go:129] OpenAPI AggregationController: action for item v1.packages.operators.coreos.com: Rate Limited Requeue.
	I0816 23:16:38.122617       1 client.go:360] parsed scheme: "passthrough"
	I0816 23:16:38.122862       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0816 23:16:38.122910       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0816 23:17:10.986826       1 client.go:360] parsed scheme: "passthrough"
	I0816 23:17:10.986874       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0816 23:17:10.986885       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0816 23:17:12.010785       1 controller.go:611] quota admission added evaluator for: ingresses.networking.k8s.io
	I0816 23:17:12.022340       1 trace.go:205] Trace[837411702]: "Create" url:/apis/networking.k8s.io/v1/namespaces/default/ingresses,user-agent:kubectl/v1.20.0 (windows/amd64) kubernetes/af46c47,client:192.168.49.1,accept:application/json,protocol:HTTP/2.0 (16-Aug-2021 23:17:11.501) (total time: 515ms):
	Trace[837411702]: ---"Object stored in database" 513ms (23:17:00.021)
	Trace[837411702]: [515.9187ms] [515.9187ms] END
	I0816 23:17:22.528813       1 controller.go:611] quota admission added evaluator for: subscriptions.operators.coreos.com
	
	* 
	* ==> kube-controller-manager [3ad8110d65d7] <==
	* E0816 23:14:06.520363       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0816 23:14:06.520843       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for catalogsources.operators.coreos.com
	I0816 23:14:06.520902       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for installplans.operators.coreos.com
	I0816 23:14:06.520928       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for operatorgroups.operators.coreos.com
	I0816 23:14:06.520950       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for volumesnapshots.snapshot.storage.k8s.io
	I0816 23:14:06.520971       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for subscriptions.operators.coreos.com
	I0816 23:14:06.521017       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for clusterserviceversions.operators.coreos.com
	I0816 23:14:06.521098       1 shared_informer.go:240] Waiting for caches to sync for resource quota
	W0816 23:14:06.916083       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0816 23:14:07.001051       1 memcache.go:196] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0816 23:14:07.108375       1 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0816 23:14:07.111119       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0816 23:14:09.623275       1 shared_informer.go:247] Caches are synced for resource quota 
	I0816 23:14:10.813178       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0816 23:14:50.933533       1 event.go:291] "Event occurred" object="ingress-nginx/ingress-nginx-admission-create" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0816 23:14:51.136448       1 event.go:291] "Event occurred" object="ingress-nginx/ingress-nginx-admission-patch" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0816 23:15:12.705582       1 event.go:291] "Event occurred" object="olm/packageserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set packageserver-6db76c9f to 2"
	I0816 23:15:12.724237       1 event.go:291] "Event occurred" object="olm/packageserver-6db76c9f" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: packageserver-6db76c9f-6lv6w"
	I0816 23:15:12.807403       1 event.go:291] "Event occurred" object="olm/packageserver-6db76c9f" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: packageserver-6db76c9f-x47pk"
	E0816 23:15:23.106963       1 memcache.go:196] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request
	E0816 23:15:23.116519       1 memcache.go:101] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request
	E0816 23:15:39.788495       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: packages.operators.coreos.com/v1: the server is currently unable to handle the request
	W0816 23:15:41.191610       1 garbagecollector.go:703] failed to discover some groups: map[packages.operators.coreos.com/v1:the server is currently unable to handle the request]
	E0816 23:16:09.896748       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: packages.operators.coreos.com/v1: the server is currently unable to handle the request
	W0816 23:16:11.310315       1 garbagecollector.go:703] failed to discover some groups: map[packages.operators.coreos.com/v1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [a4570d039fc7] <==
	* I0816 23:13:41.788937       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0816 23:13:41.789286       1 server_others.go:140] Detected node IP 192.168.49.2
	W0816 23:13:41.789325       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0816 23:13:42.137881       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0816 23:13:42.137934       1 server_others.go:212] Using iptables Proxier.
	I0816 23:13:42.137949       1 server_others.go:219] creating dualStackProxier for iptables.
	W0816 23:13:42.137965       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0816 23:13:42.138891       1 server.go:643] Version: v1.21.3
	I0816 23:13:42.182953       1 config.go:315] Starting service config controller
	I0816 23:13:42.182983       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0816 23:13:42.183017       1 config.go:224] Starting endpoint slice config controller
	I0816 23:13:42.183021       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0816 23:13:42.189783       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0816 23:13:42.199868       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0816 23:13:42.283272       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0816 23:13:42.283374       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [85b968d286a4] <==
	* E0816 23:13:19.829787       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 23:13:19.830272       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0816 23:13:19.830369       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 23:13:19.830471       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 23:13:19.830553       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 23:13:19.831513       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0816 23:13:19.884790       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 23:13:19.885562       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 23:13:19.890789       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 23:13:19.899379       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 23:13:19.899836       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 23:13:19.900578       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 23:13:19.902474       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 23:13:19.903976       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 23:13:20.783078       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 23:13:20.793809       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 23:13:20.849063       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 23:13:20.884950       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 23:13:20.903715       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 23:13:20.945561       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0816 23:13:21.026033       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 23:13:21.085541       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 23:13:21.121811       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 23:13:21.127570       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0816 23:13:23.124832       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2021-08-16 23:12:01 UTC, end at Mon 2021-08-16 23:17:37 UTC. --
	Aug 16 23:17:17 addons-20210816231050-111344 kubelet[2647]: I0816 23:17:17.198788    2647 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/registry-test through plugin: invalid network status for"
	Aug 16 23:17:17 addons-20210816231050-111344 kubelet[2647]: I0816 23:17:17.298599    2647 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="8c5b70c3fc25cd4553930371a4e3bfafc3d063c9adc1a558ca8af3651499f46f"
	Aug 16 23:17:17 addons-20210816231050-111344 kubelet[2647]: I0816 23:17:17.299798    2647 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/nginx through plugin: invalid network status for"
	Aug 16 23:17:17 addons-20210816231050-111344 kubelet[2647]: E0816 23:17:17.800426    2647 cadvisor_stats_provider.go:151] "Unable to fetch pod etc hosts stats" err="failed to get stats failed command 'du' ($ nice -n 19 du -x -s -B 1) on path /var/lib/kubelet/pods/157c6762-9e6f-439a-8539-eacc9f514d13/etc-hosts with error exit status 1" pod="default/nginx"
	Aug 16 23:17:18 addons-20210816231050-111344 kubelet[2647]: E0816 23:17:18.238461    2647 cadvisor_stats_provider.go:151] "Unable to fetch pod etc hosts stats" err="failed to get stats failed command 'du' ($ nice -n 19 du -x -s -B 1) on path /var/lib/kubelet/pods/a5ac0575-8d04-4cc3-83b9-cc0986a8d8e4/etc-hosts with error exit status 1" pod="kube-system/metrics-server-77c99ccb96-glflj"
	Aug 16 23:17:18 addons-20210816231050-111344 kubelet[2647]: I0816 23:17:18.518467    2647 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/nginx through plugin: invalid network status for"
	Aug 16 23:17:18 addons-20210816231050-111344 kubelet[2647]: E0816 23:17:18.604990    2647 cadvisor_stats_provider.go:151] "Unable to fetch pod etc hosts stats" err="failed to get stats failed command 'du' ($ nice -n 19 du -x -s -B 1) on path /var/lib/kubelet/pods/0c620427-122f-4254-80fd-80b0a6c7e6fc/etc-hosts with error exit status 1" pod="default/registry-test"
	Aug 16 23:17:21 addons-20210816231050-111344 kubelet[2647]: I0816 23:17:21.429472    2647 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/registry-test through plugin: invalid network status for"
	Aug 16 23:17:23 addons-20210816231050-111344 kubelet[2647]: E0816 23:17:23.931747    2647 remote_runtime.go:394] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="84df63b92413ef4120d4435ce4159de0fb6c738ce2359c81848efbb0271ac7f0" cmd=[grpc_health_probe -addr=:50051]
	Aug 16 23:17:24 addons-20210816231050-111344 kubelet[2647]: I0816 23:17:24.616756    2647 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/registry-test through plugin: invalid network status for"
	Aug 16 23:17:24 addons-20210816231050-111344 kubelet[2647]: I0816 23:17:24.653219    2647 scope.go:111] "RemoveContainer" containerID="cc3b85f1fc2ff88d41bcb597a6aeb0546addde759acbe3a13821f005cb2c5453"
	Aug 16 23:17:25 addons-20210816231050-111344 kubelet[2647]: E0816 23:17:25.144337    2647 kubelet.go:1701] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[kube-api-access-55hpb], unattached volumes=[kube-api-access-55hpb]: timed out waiting for the condition" pod="olm/operatorhubio-catalog-6sbbz"
	Aug 16 23:17:25 addons-20210816231050-111344 kubelet[2647]: E0816 23:17:25.144412    2647 pod_workers.go:190] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-55hpb], unattached volumes=[kube-api-access-55hpb]: timed out waiting for the condition" pod="olm/operatorhubio-catalog-6sbbz" podUID=2375bb37-88bf-43e3-a2a8-71b6f5f5f867
	Aug 16 23:17:26 addons-20210816231050-111344 kubelet[2647]: E0816 23:17:26.321542    2647 fsHandler.go:114] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/docker/overlay2/3904ff4946abd262bf286d03c912f05d1a4f543db6b8fffc9d80b91d104c9fd5/diff" to get inode usage: stat /var/lib/docker/overlay2/3904ff4946abd262bf286d03c912f05d1a4f543db6b8fffc9d80b91d104c9fd5/diff: no such file or directory, extraDiskErr: could not stat "/var/lib/docker/containers/281770e64cd409f3d0135ffdb66d959fc5d880338d7e71ae00b671f5f0621c92" to get inode usage: stat /var/lib/docker/containers/281770e64cd409f3d0135ffdb66d959fc5d880338d7e71ae00b671f5f0621c92: no such file or directory
	Aug 16 23:17:27 addons-20210816231050-111344 kubelet[2647]: I0816 23:17:27.543723    2647 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbw9w\" (UniqueName: \"kubernetes.io/projected/0c620427-122f-4254-80fd-80b0a6c7e6fc-kube-api-access-mbw9w\") pod \"0c620427-122f-4254-80fd-80b0a6c7e6fc\" (UID: \"0c620427-122f-4254-80fd-80b0a6c7e6fc\") "
	Aug 16 23:17:27 addons-20210816231050-111344 kubelet[2647]: I0816 23:17:27.691874    2647 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c620427-122f-4254-80fd-80b0a6c7e6fc-kube-api-access-mbw9w" (OuterVolumeSpecName: "kube-api-access-mbw9w") pod "0c620427-122f-4254-80fd-80b0a6c7e6fc" (UID: "0c620427-122f-4254-80fd-80b0a6c7e6fc"). InnerVolumeSpecName "kube-api-access-mbw9w". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 16 23:17:27 addons-20210816231050-111344 kubelet[2647]: I0816 23:17:27.701760    2647 reconciler.go:319] "Volume detached for volume \"kube-api-access-mbw9w\" (UniqueName: \"kubernetes.io/projected/0c620427-122f-4254-80fd-80b0a6c7e6fc-kube-api-access-mbw9w\") on node \"addons-20210816231050-111344\" DevicePath \"\""
	Aug 16 23:17:27 addons-20210816231050-111344 kubelet[2647]: I0816 23:17:27.924604    2647 kubelet_getters.go:300] "Path does not exist" path="/var/lib/kubelet/pods/0c620427-122f-4254-80fd-80b0a6c7e6fc/volumes"
	Aug 16 23:17:28 addons-20210816231050-111344 kubelet[2647]: I0816 23:17:28.120451    2647 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="d70fd470d478e5e375e0251447ffeb89318eb26fea9cb7dd7c59554788c7abde"
	Aug 16 23:17:28 addons-20210816231050-111344 kubelet[2647]: I0816 23:17:28.902542    2647 scope.go:111] "RemoveContainer" containerID="cc3b85f1fc2ff88d41bcb597a6aeb0546addde759acbe3a13821f005cb2c5453"
	Aug 16 23:17:30 addons-20210816231050-111344 kubelet[2647]: I0816 23:17:30.743838    2647 topology_manager.go:187] "Topology Admit Handler"
	Aug 16 23:17:30 addons-20210816231050-111344 kubelet[2647]: I0816 23:17:30.902416    2647 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b87sj\" (UniqueName: \"kubernetes.io/projected/6884477a-6796-4c3a-a5d9-d70fc571b047-kube-api-access-b87sj\") pod \"helm-test\" (UID: \"6884477a-6796-4c3a-a5d9-d70fc571b047\") "
	Aug 16 23:17:33 addons-20210816231050-111344 kubelet[2647]: E0816 23:17:33.827849    2647 remote_runtime.go:394] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="84df63b92413ef4120d4435ce4159de0fb6c738ce2359c81848efbb0271ac7f0" cmd=[grpc_health_probe -addr=:50051]
	Aug 16 23:17:36 addons-20210816231050-111344 kubelet[2647]: I0816 23:17:36.094551    2647 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="f468f3e127df7886bf4f16f2938f4fd7b63b19e1131039ce85b4998a1b36226e"
	Aug 16 23:17:36 addons-20210816231050-111344 kubelet[2647]: I0816 23:17:36.100396    2647 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/helm-test through plugin: invalid network status for"
	
	* 
	* ==> storage-provisioner [df584c58c6a3] <==
	* I0816 23:13:55.489277       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0816 23:13:56.333291       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0816 23:13:56.333362       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0816 23:13:57.785984       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0816 23:13:57.786214       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-20210816231050-111344_32acd661-9e66-4b08-bdc9-eb8bd9889192!
	I0816 23:13:57.787399       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a62c6f25-50d1-476d-afb8-3dcf2e023e9b", APIVersion:"v1", ResourceVersion:"759", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-20210816231050-111344_32acd661-9e66-4b08-bdc9-eb8bd9889192 became leader
	I0816 23:13:57.990383       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-20210816231050-111344_32acd661-9e66-4b08-bdc9-eb8bd9889192!
	

                                                
                                                
-- /stdout --
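
The component logs in the dump above use klog's severity prefix — a single letter I/W/E followed by MMDD, as in "E0816 23:14:04.295167". When triaging a post-mortem dump like this one, filtering out the INFO noise makes the warnings and errors easier to scan. A minimal sketch in Go, assuming the dump has been saved to a local file; the filename "postmortem.log" is a placeholder, not something the harness writes:

	// filter_klog.go: print only klog WARNING/ERROR lines from a saved
	// post-mortem dump. The input path "postmortem.log" is hypothetical.
	package main
	
	import (
		"bufio"
		"fmt"
		"log"
		"os"
		"regexp"
	)
	
	func main() {
		f, err := os.Open("postmortem.log")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()
	
		// klog lines start with a severity letter and MMDD, e.g.
		// "E0816 23:14:04.295167 ..."; keep W (warning) and E (error).
		// Dump lines may be prefixed with a tab and "* ".
		sev := regexp.MustCompile(`^\s*\*?\s*[WE]\d{4} `)
		sc := bufio.NewScanner(f)
		sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // some dump lines are very long
		for sc.Scan() {
			if sev.MatchString(sc.Text()) {
				fmt.Println(sc.Text())
			}
		}
		if err := sc.Err(); err != nil {
			log.Fatal(err)
		}
	}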

                                                
                                                
=== CONT  TestAddons/parallel/GCPAuth
helpers_test.go:255: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-20210816231050-111344 -n addons-20210816231050-111344

                                                
                                                
=== CONT  TestAddons/parallel/GCPAuth
helpers_test.go:255: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-20210816231050-111344 -n addons-20210816231050-111344: (5.0010491s)
helpers_test.go:262: (dbg) Run:  kubectl --context addons-20210816231050-111344 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: nginx ingress-nginx-admission-create-qpljr ingress-nginx-admission-patch-g24hs helm-test
helpers_test.go:273: ======> post-mortem[TestAddons/parallel/GCPAuth]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context addons-20210816231050-111344 describe pod nginx ingress-nginx-admission-create-qpljr ingress-nginx-admission-patch-g24hs helm-test
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context addons-20210816231050-111344 describe pod nginx ingress-nginx-admission-create-qpljr ingress-nginx-admission-patch-g24hs helm-test: exit status 1 (313.9634ms)

                                                
                                                
-- stdout --
	Name:         nginx
	Namespace:    default
	Priority:     0
	Node:         addons-20210816231050-111344/192.168.49.2
	Start Time:   Mon, 16 Aug 2021 23:17:12 +0000
	Labels:       run=nginx
	Annotations:  <none>
	Status:       Pending
	IP:           
	IPs:          <none>
	Containers:
	  nginx:
	    Container ID:   
	    Image:          nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-d4wj7 (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  kube-api-access-d4wj7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  32s   default-scheduler  Successfully assigned default/nginx to addons-20210816231050-111344
	  Normal  Pulling    27s   kubelet            Pulling image "nginx:alpine"
	  Normal  Pulled     3s    kubelet            Successfully pulled image "nginx:alpine" in 24.1860279s
	  Normal  Created    2s    kubelet            Created container nginx
	  Normal  Started    0s    kubelet            Started container nginx

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-qpljr" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-g24hs" not found
	Error from server (NotFound): pods "helm-test" not found

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context addons-20210816231050-111344 describe pod nginx ingress-nginx-admission-create-qpljr ingress-nginx-admission-patch-g24hs helm-test: exit status 1
--- FAIL: TestAddons/parallel/GCPAuth (43.67s)
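
The post-mortem helpers above first list every pod whose status.phase is not Running (helpers_test.go:262) and then describe the survivors (helpers_test.go:276). The same first step can be reproduced outside the harness; this is a minimal client-go sketch, not the test's actual code — the kubeconfig path handling is an assumption, and selecting the test's --context would additionally need clientcmd overrides:

	// nonrunning.go: list pods not in phase Running across all namespaces,
	// mirroring: kubectl get po -A --field-selector=status.phase!=Running
	package main
	
	import (
		"context"
		"fmt"
		"log"
		"os"
		"path/filepath"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Assumed default kubeconfig location; adjust for Windows runners.
		kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// Same field selector the test helper passes to kubectl.
		pods, err := cs.CoreV1().Pods("").List(context.Background(), metav1.ListOptions{
			FieldSelector: "status.phase!=Running",
		})
		if err != nil {
			log.Fatal(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s\t%s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}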

                                                
                                    
TestCertOptions (220.39s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:47: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-20210817001948-111344 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:47: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-20210817001948-111344 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost: (2m54.1667176s)
cert_options_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-20210817001948-111344 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe -p cert-options-20210817001948-111344 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": (4.8019328s)
cert_options_test.go:73: (dbg) Run:  kubectl --context cert-options-20210817001948-111344 config view
cert_options_test.go:78: apiserver server port incorrect. Output of 'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters:\n\t- cluster:\n\t    certificate-authority: C:\\Users\\jenkins\\minikube-integration\\.minikube\\ca.crt\n\t    extensions:\n\t    - extension:\n\t        last-update: Tue, 17 Aug 2021 00:22:33 GMT\n\t        provider: minikube.sigs.k8s.io\n\t        version: v1.22.0\n\t      name: cluster_info\n\t    server: https://localhost:55151\n\t  name: cert-options-20210817001948-111344\n\tcontexts:\n\t- context:\n\t    cluster: cert-options-20210817001948-111344\n\t    extensions:\n\t    - extension:\n\t        last-update: Tue, 17 Aug 2021 00:22:33 GMT\n\t        provider: minikube.sigs.k8s.io\n\t        version: v1.22.0\n\t      name: context_info\n\t    namespace: default\n\t    user: cert-options-20210817001948-111344\n\t  name: cert-options-20210817001948-111344\n\tcurrent-context: cert-options-20210817001948-111344\n\tkind: Config\n\tpreferences: {}\n\tusers:\n\t- name: cert-options-20210817001948-111344\n\t  user:\n\t    client-certificate: C:\\Users\\jenkins\\minikube-integration\\.minikube\\profiles\\cert-options-20210817001948-111344\\client.crt\n\t    client-key: C:\\Users\\jenkins\\minikube-integration\\.minikube\\profiles\\cert-options-20210817001948-111344\\client.key\n\n-- /stdout --"
cert_options_test.go:81: *** TestCertOptions FAILED at 2021-08-17 00:22:47.64938 +0000 GMT m=+4424.724062501
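
The assertion that failed here compares the port in the kubeconfig's server URL against the value passed via --apiserver-port=8555; the captured output instead shows server: https://localhost:55151, the Docker-published host port. A minimal sketch of that shape of check, not the test's actual source — the sample URL is copied from the output above:

	// portcheck.go: extract the port from a kubeconfig server URL and
	// compare it against the expected apiserver port.
	package main
	
	import (
		"fmt"
		"net/url"
	)
	
	func serverPort(server string) (string, error) {
		u, err := url.Parse(server)
		if err != nil {
			return "", err
		}
		return u.Port(), nil
	}
	
	func main() {
		const want = "8555" // value passed via --apiserver-port
		got, err := serverPort("https://localhost:55151") // from the captured config
		if err != nil {
			panic(err)
		}
		if got != want {
			fmt.Printf("apiserver server port incorrect: got %s, want %s\n", got, want)
		}
	}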
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestCertOptions]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect cert-options-20210817001948-111344
helpers_test.go:236: (dbg) docker inspect cert-options-20210817001948-111344:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1c330a202b8878181d28bda30ff00e24f32a7b76952d438778599a27b31f4da0",
	        "Created": "2021-08-17T00:20:01.7840164Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 159483,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-17T00:20:04.1240486Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/1c330a202b8878181d28bda30ff00e24f32a7b76952d438778599a27b31f4da0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1c330a202b8878181d28bda30ff00e24f32a7b76952d438778599a27b31f4da0/hostname",
	        "HostsPath": "/var/lib/docker/containers/1c330a202b8878181d28bda30ff00e24f32a7b76952d438778599a27b31f4da0/hosts",
	        "LogPath": "/var/lib/docker/containers/1c330a202b8878181d28bda30ff00e24f32a7b76952d438778599a27b31f4da0/1c330a202b8878181d28bda30ff00e24f32a7b76952d438778599a27b31f4da0-json.log",
	        "Name": "/cert-options-20210817001948-111344",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "cert-options-20210817001948-111344:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "cert-options-20210817001948-111344",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8555/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2147483648,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/cb6e0fc2778969ae59e6572180a3f359cd023b7485109b9a1fc31e1182498acb-init/diff:/var/lib/docker/overlay2/e167e57d4b442602b2435f5ffd2147b1da53de34df49d96ce69565867fcf3850/diff:/var/lib/docker/overlay2/dbfef15a73962254d5bcc2c91a409021fc3573c3135096621d707c6f4feaac7d/diff:/var/lib/docker/overlay2/7fc44848dc580276135d9db2b62ce047cfba1909de5e91acbe8c1a5fc8fb3649/diff:/var/lib/docker/overlay2/493996ff2d6a75ef70db2749dded6936397fe536c32e28dda979b8af93e19f13/diff:/var/lib/docker/overlay2/b862553905dec6f42a41351a012fdce386251d97160f74f6b1feb3b455e1f53a/diff:/var/lib/docker/overlay2/517a8b2830d9e81ff950c8305063a6681219abbb7b22f3a87587fa819a0728ed/diff:/var/lib/docker/overlay2/f2b268080cfd9bbb64731ea6b7cb2ec64077e6c2701c2ab6e8b358a541056c5d/diff:/var/lib/docker/overlay2/ee5e612696333c681900cad605a1f678e9114e9c7ecf70717fad21aea1e52992/diff:/var/lib/docker/overlay2/6f44289af0b09a02645c237aabeff61487c57040b9531c0f7bd97517308bfd57/diff:/var/lib/docker/overlay2/f98f67
21a411bacf9d310d4d4405fbd528fa90d60af5ffabda9d55cef9ef3033/diff:/var/lib/docker/overlay2/8bc2e0f6b7c2aeccc6a944f316dbac5672f8685cc5dd5d3c2fc4bd370db4949f/diff:/var/lib/docker/overlay2/ef9e793c1e243004ff088f210369994837eb19a8abd21cf93f75257155445f16/diff:/var/lib/docker/overlay2/48fa7f37fc37f8220a31f4294bc800ef7a33c53c10bdc23d7dc68f27cfe4e535/diff:/var/lib/docker/overlay2/54bc5e0e0c32fdc66ce3eeb345721201a63a0c878d4665607246cd4aa5af61e5/diff:/var/lib/docker/overlay2/398c3fc63254fcc564086ced0eb7211f2d474f8bbdcd43ee27fd609e767c44a6/diff:/var/lib/docker/overlay2/796acb5b93384da004a8065a332cbb07c952569bdd7bb5e551b218e4c5c61f73/diff:/var/lib/docker/overlay2/d90baef87ad95bdfb14a2f35e4cb62336e18c21eb934266f43bfbe017252b857/diff:/var/lib/docker/overlay2/c16752decc8ef06fce4eebdf4ff4725414f3aa80cccd7b3ffdc325095930c0b4/diff:/var/lib/docker/overlay2/a679084eec181b0e1408e573d1ac08c47af1fd8266eb5884bf1a38d5ba0ddbbc/diff:/var/lib/docker/overlay2/15becb79b0d40211562ae33ddc5ec776276b9ae42c8a9f4645dcc6442b36f771/diff:/var/lib/d
ocker/overlay2/068a9a5dce1094eb72788237bd9cda4c76345774d5e647f0af81302a75861f4a/diff:/var/lib/docker/overlay2/74b9e9d807e09734ee96c76bc67adc56c9e3286b39315f89f6747c8c917ad2e5/diff:/var/lib/docker/overlay2/75de8e4895a0b4efe563705c06184db384b5c40154856b9bca2106a8d59fc151/diff:/var/lib/docker/overlay2/cbca3c40b21fee2ef276744168492f17203934aca8de4b459edae2fa55b6bb02/diff:/var/lib/docker/overlay2/584d28a6308bb998bd89d7d92c45b57b9dd66de472d166972d2f5195afd9dd44/diff:/var/lib/docker/overlay2/9c722118749c036eb2d00ba5a6925c5f32b121d64974c99e2de552b26a8bb7cd/diff:/var/lib/docker/overlay2/24908c792743f57c182587c66263f074ed86ae7c5812c631dea82d8ec6650e81/diff:/var/lib/docker/overlay2/9a8de59bfb816b3fc2f0fd522ef966196534483b5e87aafd180dd8b07e9c3582/diff:/var/lib/docker/overlay2/df46d170084213da519dea7e0f402d51272dc10df4d7cd7f37c528c411ac7000/diff:/var/lib/docker/overlay2/36b86a6f515e5882426e598755bb77d43cc340fd20798dfd0a810cd2ab96eeb6/diff:/var/lib/docker/overlay2/b54ac02f70047359cd143a32f862d18498cb556877ccfd252defb9d17fc
9d9f5/diff:/var/lib/docker/overlay2/971b77d080920997e1d0d0936f312a9a322ccd6ab9920c83a8eb5d14b93c3849/diff:/var/lib/docker/overlay2/5b5c21ae360c7e0738c0048bc3fe8d7d3cc0640d266660121f3968f675f42063/diff:/var/lib/docker/overlay2/e07bf2561a99ba47435b8f84b267268e02e9e4ff47832bd5054ee28bb1ca5001/diff:/var/lib/docker/overlay2/0c560be48f01814af21ec54fc79ea5e8db28f05e967a17b331be28ad61c75483/diff:/var/lib/docker/overlay2/27930667f3fd0fd38c13a39c0590c03a2c3b3ba04f0a3c946167be6a40f50c46/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cb6e0fc2778969ae59e6572180a3f359cd023b7485109b9a1fc31e1182498acb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cb6e0fc2778969ae59e6572180a3f359cd023b7485109b9a1fc31e1182498acb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cb6e0fc2778969ae59e6572180a3f359cd023b7485109b9a1fc31e1182498acb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "cert-options-20210817001948-111344",
	                "Source": "/var/lib/docker/volumes/cert-options-20210817001948-111344/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "cert-options-20210817001948-111344",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8555/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "cert-options-20210817001948-111344",
	                "name.minikube.sigs.k8s.io": "cert-options-20210817001948-111344",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a92c8fd0b12566a126cd83c1f915f70e35cf41e66c88017bc1084e3cd63869e3",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55155"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55153"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55150"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55152"
	                    }
	                ],
	                "8555/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55151"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a92c8fd0b125",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "cert-options-20210817001948-111344": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "1c330a202b88",
	                        "cert-options-20210817001948-111344"
	                    ],
	                    "NetworkID": "2666f807c1dcc894c86873842926aab610164b4410cb2098931f1473feba17a2",
	                    "EndpointID": "0d8b049c5cc5a34159407bc7c95c4690719380e2289b415fb8c8d20dbcc1dfa1",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
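The inspect output above is the post-mortem's key evidence: the cert-options container is up, with 22/tcp published on 127.0.0.1:55155 and the custom apiserver port 8555/tcp on 127.0.0.1:55151. To pull a single forwarded port out of output like this by hand, the same Go-template query that minikube itself issues later in this log can be run directly:

    docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-20210817001948-111344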
helpers_test.go:240: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p cert-options-20210817001948-111344 -n cert-options-20210817001948-111344
helpers_test.go:240: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p cert-options-20210817001948-111344 -n cert-options-20210817001948-111344: (4.7223921s)
helpers_test.go:245: <<< TestCertOptions FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestCertOptions]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-20210817001948-111344 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p cert-options-20210817001948-111344 logs -n 25: (10.0871182s)
helpers_test.go:253: TestCertOptions logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------|------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	| Command |                   Args                   |                 Profile                  |          User           | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------|------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	| delete  | -p                                       | kubernetes-upgrade-20210817001119-111344 | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:15:43 GMT | Tue, 17 Aug 2021 00:15:56 GMT |
	|         | kubernetes-upgrade-20210817001119-111344 |                                          |                         |         |                               |                               |
	| start   | -p                                       | offline-docker-20210817001119-111344     | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:11:19 GMT | Tue, 17 Aug 2021 00:15:59 GMT |
	|         | offline-docker-20210817001119-111344     |                                          |                         |         |                               |                               |
	|         | --alsologtostderr -v=1 --memory=2048     |                                          |                         |         |                               |                               |
	|         | --wait=true --driver=docker              |                                          |                         |         |                               |                               |
	| delete  | -p                                       | offline-docker-20210817001119-111344     | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:16:00 GMT | Tue, 17 Aug 2021 00:16:18 GMT |
	|         | offline-docker-20210817001119-111344     |                                          |                         |         |                               |                               |
	| start   | -p                                       | stopped-upgrade-20210817001119-111344    | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:16:22 GMT | Tue, 17 Aug 2021 00:18:14 GMT |
	|         | stopped-upgrade-20210817001119-111344    |                                          |                         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr -v=1     |                                          |                         |         |                               |                               |
	|         | --driver=docker                          |                                          |                         |         |                               |                               |
	| delete  | -p                                       | stopped-upgrade-20210817001119-111344    | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:18:54 GMT | Tue, 17 Aug 2021 00:19:12 GMT |
	|         | stopped-upgrade-20210817001119-111344    |                                          |                         |         |                               |                               |
	| start   | -p                                       | docker-flags-20210817001618-111344       | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:16:18 GMT | Tue, 17 Aug 2021 00:19:22 GMT |
	|         | docker-flags-20210817001618-111344       |                                          |                         |         |                               |                               |
	|         | --cache-images=false --memory=2048       |                                          |                         |         |                               |                               |
	|         | --install-addons=false                   |                                          |                         |         |                               |                               |
	|         | --wait=false --docker-env=FOO=BAR        |                                          |                         |         |                               |                               |
	|         | --docker-env=BAZ=BAT                     |                                          |                         |         |                               |                               |
	|         | --docker-opt=debug                       |                                          |                         |         |                               |                               |
	|         | --docker-opt=icc=true                    |                                          |                         |         |                               |                               |
	|         | --alsologtostderr -v=5                   |                                          |                         |         |                               |                               |
	|         | --driver=docker                          |                                          |                         |         |                               |                               |
	| -p      | docker-flags-20210817001618-111344       | docker-flags-20210817001618-111344       | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:19:22 GMT | Tue, 17 Aug 2021 00:19:26 GMT |
	|         | ssh sudo systemctl show docker           |                                          |                         |         |                               |                               |
	|         | --property=Environment --no-pager        |                                          |                         |         |                               |                               |
	| -p      | docker-flags-20210817001618-111344       | docker-flags-20210817001618-111344       | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:19:27 GMT | Tue, 17 Aug 2021 00:19:30 GMT |
	|         | ssh sudo systemctl show docker           |                                          |                         |         |                               |                               |
	|         | --property=ExecStart --no-pager          |                                          |                         |         |                               |                               |
	| delete  | -p                                       | docker-flags-20210817001618-111344       | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:19:31 GMT | Tue, 17 Aug 2021 00:19:48 GMT |
	|         | docker-flags-20210817001618-111344       |                                          |                         |         |                               |                               |
	| start   | -p pause-20210817001556-111344           | pause-20210817001556-111344              | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:15:56 GMT | Tue, 17 Aug 2021 00:20:04 GMT |
	|         | --memory=2048                            |                                          |                         |         |                               |                               |
	|         | --install-addons=false                   |                                          |                         |         |                               |                               |
	|         | --wait=all --driver=docker               |                                          |                         |         |                               |                               |
	| start   | -p                                       | running-upgrade-20210817001515-111344    | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:18:33 GMT | Tue, 17 Aug 2021 00:20:48 GMT |
	|         | running-upgrade-20210817001515-111344    |                                          |                         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr -v=1     |                                          |                         |         |                               |                               |
	|         | --driver=docker                          |                                          |                         |         |                               |                               |
	| start   | -p pause-20210817001556-111344           | pause-20210817001556-111344              | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:20:04 GMT | Tue, 17 Aug 2021 00:20:53 GMT |
	|         | --alsologtostderr -v=1                   |                                          |                         |         |                               |                               |
	|         | --driver=docker                          |                                          |                         |         |                               |                               |
	| pause   | -p pause-20210817001556-111344           | pause-20210817001556-111344              | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:20:53 GMT | Tue, 17 Aug 2021 00:20:59 GMT |
	|         | --alsologtostderr -v=5                   |                                          |                         |         |                               |                               |
	| unpause | -p pause-20210817001556-111344           | pause-20210817001556-111344              | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:21:03 GMT | Tue, 17 Aug 2021 00:21:08 GMT |
	|         | --alsologtostderr -v=5                   |                                          |                         |         |                               |                               |
	| delete  | -p                                       | running-upgrade-20210817001515-111344    | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:20:49 GMT | Tue, 17 Aug 2021 00:21:11 GMT |
	|         | running-upgrade-20210817001515-111344    |                                          |                         |         |                               |                               |
	| pause   | -p pause-20210817001556-111344           | pause-20210817001556-111344              | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:21:09 GMT | Tue, 17 Aug 2021 00:21:14 GMT |
	|         | --alsologtostderr -v=5                   |                                          |                         |         |                               |                               |
	| delete  | -p pause-20210817001556-111344           | pause-20210817001556-111344              | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:21:14 GMT | Tue, 17 Aug 2021 00:21:34 GMT |
	|         | --alsologtostderr -v=5                   |                                          |                         |         |                               |                               |
	| profile | list --output json                       | minikube                                 | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:21:34 GMT | Tue, 17 Aug 2021 00:21:49 GMT |
	| delete  | -p pause-20210817001556-111344           | pause-20210817001556-111344              | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:21:51 GMT | Tue, 17 Aug 2021 00:21:57 GMT |
	| delete  | -p                                       | flannel-20210817002157-111344            | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:21:57 GMT | Tue, 17 Aug 2021 00:22:04 GMT |
	|         | flannel-20210817002157-111344            |                                          |                         |         |                               |                               |
	| start   | -p                                       | force-systemd-flag-20210817001912-111344 | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:19:12 GMT | Tue, 17 Aug 2021 00:22:09 GMT |
	|         | force-systemd-flag-20210817001912-111344 |                                          |                         |         |                               |                               |
	|         | --memory=2048 --force-systemd            |                                          |                         |         |                               |                               |
	|         | --alsologtostderr -v=5 --driver=docker   |                                          |                         |         |                               |                               |
	| -p      | force-systemd-flag-20210817001912-111344 | force-systemd-flag-20210817001912-111344 | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:22:09 GMT | Tue, 17 Aug 2021 00:22:17 GMT |
	|         | ssh docker info --format                 |                                          |                         |         |                               |                               |
	|         | {{.CgroupDriver}}                        |                                          |                         |         |                               |                               |
	| delete  | -p                                       | force-systemd-flag-20210817001912-111344 | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:22:17 GMT | Tue, 17 Aug 2021 00:22:37 GMT |
	|         | force-systemd-flag-20210817001912-111344 |                                          |                         |         |                               |                               |
	| start   | -p                                       | cert-options-20210817001948-111344       | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:19:48 GMT | Tue, 17 Aug 2021 00:22:42 GMT |
	|         | cert-options-20210817001948-111344       |                                          |                         |         |                               |                               |
	|         | --memory=2048                            |                                          |                         |         |                               |                               |
	|         | --apiserver-ips=127.0.0.1                |                                          |                         |         |                               |                               |
	|         | --apiserver-ips=192.168.15.15            |                                          |                         |         |                               |                               |
	|         | --apiserver-names=localhost              |                                          |                         |         |                               |                               |
	|         | --apiserver-names=www.google.com         |                                          |                         |         |                               |                               |
	|         | --apiserver-port=8555                    |                                          |                         |         |                               |                               |
	|         | --driver=docker                          |                                          |                         |         |                               |                               |
	|         | --apiserver-name=localhost               |                                          |                         |         |                               |                               |
	| -p      | cert-options-20210817001948-111344       | cert-options-20210817001948-111344       | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:22:42 GMT | Tue, 17 Aug 2021 00:22:47 GMT |
	|         | ssh openssl x509 -text -noout -in        |                                          |                         |         |                               |                               |
	|         | /var/lib/minikube/certs/apiserver.crt    |                                          |                         |         |                               |                               |
	|---------|------------------------------------------|------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
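The audit table above captures the invocation under test: cert-options-20210817001948-111344 was started at 00:19:48 with custom --apiserver-ips/--apiserver-names and --apiserver-port=8555, and the resulting apiserver certificate was then dumped over SSH. To replay those two steps outside the harness, the equivalent commands, assembled verbatim from the table rows, would be roughly:

    minikube start -p cert-options-20210817001948-111344 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost
    minikube -p cert-options-20210817001948-111344 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt

Note that the Last Start log below interleaves three concurrent minikube runs; lines can be told apart by PID: 61480 (no-preload start), 108300 (cert-options addon/finish phase), and 81396 (old-k8s-version provisioning).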
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/17 00:22:38
	Running on machine: windows-server-2
	Binary: Built with gc go1.16.7 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0817 00:22:38.175738   61480 out.go:298] Setting OutFile to fd 3504 ...
	I0817 00:22:38.177734   61480 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 00:22:38.177734   61480 out.go:311] Setting ErrFile to fd 3488...
	I0817 00:22:38.177734   61480 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 00:22:38.203276   61480 out.go:305] Setting JSON to false
	I0817 00:22:38.208671   61480 start.go:111] hostinfo: {"hostname":"windows-server-2","uptime":8367805,"bootTime":1620791953,"procs":148,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2f8328f4-5428-47c7-ab5a-b32e2504bd6f"}
	W0817 00:22:38.208671   61480 start.go:119] gopshost.Virtualization returned error: not implemented yet
	I0817 00:22:38.361349   61480 out.go:177] * [no-preload-20210817002237-111344] minikube v1.22.0 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	I0817 00:22:38.364727   61480 notify.go:169] Checking for updates...
	I0817 00:22:38.495538   61480 out.go:177]   - KUBECONFIG=C:\Users\jenkins\minikube-integration\kubeconfig
	I0817 00:22:34.854814  108300 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 00:22:34.858825  108300 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0817 00:22:34.858825  108300 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 00:22:34.858825  108300 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0817 00:22:34.866814  108300 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-20210817001948-111344
	I0817 00:22:34.870822  108300 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8555/tcp") 0).HostPort}}'" cert-options-20210817001948-111344
	I0817 00:22:35.362830  108300 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0817 00:22:35.362830  108300 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0817 00:22:35.373099  108300 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-20210817001948-111344
	I0817 00:22:35.412760  108300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55155 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\cert-options-20210817001948-111344\id_rsa Username:docker}
	I0817 00:22:35.474507  108300 api_server.go:50] waiting for apiserver process to appear ...
	I0817 00:22:35.482964  108300 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:22:35.898777  108300 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 00:22:35.905597  108300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55155 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\cert-options-20210817001948-111344\id_rsa Username:docker}
	I0817 00:22:36.340061  108300 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0817 00:22:37.065955  108300 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.2068377s)
	I0817 00:22:37.065955  108300 start.go:728] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0817 00:22:37.066115  108300 ssh_runner.go:189] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.5829309s)
	I0817 00:22:37.066115  108300 api_server.go:70] duration metric: took 2.841057s to wait for apiserver process to appear ...
	I0817 00:22:37.066115  108300 api_server.go:86] waiting for apiserver healthz status ...
	I0817 00:22:37.066115  108300 api_server.go:239] Checking apiserver healthz at https://localhost:55151/healthz ...
	I0817 00:22:37.424258  108300 api_server.go:265] https://localhost:55151/healthz returned 200:
	ok
	I0817 00:22:37.433386  108300 api_server.go:139] control plane version: v1.21.3
	I0817 00:22:37.433386  108300 api_server.go:129] duration metric: took 367.2564ms to wait for apiserver health ...
	I0817 00:22:37.433386  108300 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 00:22:37.489305  108300 system_pods.go:59] 4 kube-system pods found
	I0817 00:22:37.489305  108300 system_pods.go:61] "etcd-cert-options-20210817001948-111344" [a22c48b0-c5dc-4779-a50d-234eb8c14eff] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0817 00:22:37.489305  108300 system_pods.go:61] "kube-apiserver-cert-options-20210817001948-111344" [33094165-c181-4291-9431-24517a1ec84e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0817 00:22:37.489305  108300 system_pods.go:61] "kube-controller-manager-cert-options-20210817001948-111344" [60904379-fdef-45eb-bcbe-2247e38c6afd] Pending
	I0817 00:22:37.489305  108300 system_pods.go:61] "kube-scheduler-cert-options-20210817001948-111344" [e648e51e-fe37-45b7-a3ba-389d5382f302] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0817 00:22:37.489305  108300 system_pods.go:74] duration metric: took 55.9169ms to wait for pod list to return data ...
	I0817 00:22:37.489305  108300 kubeadm.go:547] duration metric: took 3.2642303s to wait for : map[apiserver:true system_pods:true] ...
	I0817 00:22:37.489305  108300 node_conditions.go:102] verifying NodePressure condition ...
	I0817 00:22:37.644467  108300 node_conditions.go:122] node storage ephemeral capacity is 65792556Ki
	I0817 00:22:37.644467  108300 node_conditions.go:123] node cpu capacity is 4
	I0817 00:22:37.644467  108300 node_conditions.go:105] duration metric: took 155.1565ms to run NodePressure ...
	I0817 00:22:37.644467  108300 start.go:231] waiting for startup goroutines ...
	I0817 00:22:38.646132   61480 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins\minikube-integration\.minikube
	I0817 00:22:34.563807   81396 main.go:130] libmachine: Using SSH client type: native
	I0817 00:22:34.564307   81396 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x4c95a0] 0x4c9560 <nil>  [] 0s} 127.0.0.1 55163 <nil> <nil>}
	I0817 00:22:34.564307   81396 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0817 00:22:38.997508   61480 out.go:177]   - MINIKUBE_LOCATION=12230
	I0817 00:22:39.002258   61480 config.go:177] Loaded profile config "cert-options-20210817001948-111344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.21.3
	I0817 00:22:39.003944   61480 config.go:177] Loaded profile config "missing-upgrade-20210817002111-111344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0817 00:22:39.005991   61480 config.go:177] Loaded profile config "old-k8s-version-20210817002204-111344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.14.0
	I0817 00:22:39.006431   61480 driver.go:335] Setting default libvirt URI to qemu:///system
	I0817 00:22:40.809794   61480 docker.go:132] docker version: linux-20.10.2
	I0817 00:22:40.818010   61480 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 00:22:41.625473   61480 info.go:263] docker info: {ID:4XCY:3GZD:KK67:IPM7:RRQF:WWZF:OGQ6:X6HQ:572M:7N57:P63G:EAE5 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:70 SystemTime:2021-08-17 00:22:41.2613972 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0817 00:22:42.397322  108300 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.498175s)
	I0817 00:22:42.397322  108300 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.0570313s)
	I0817 00:22:41.683088   61480 out.go:177] * Using the docker driver based on user configuration
	I0817 00:22:41.683744   61480 start.go:278] selected driver: docker
	I0817 00:22:41.683744   61480 start.go:751] validating driver "docker" against <nil>
	I0817 00:22:41.684118   61480 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0817 00:22:41.778498   61480 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 00:22:42.574655   61480 info.go:263] docker info: {ID:4XCY:3GZD:KK67:IPM7:RRQF:WWZF:OGQ6:X6HQ:572M:7N57:P63G:EAE5 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:true NGoroutines:59 SystemTime:2021-08-17 00:22:42.1767864 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0817 00:22:42.575096   61480 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0817 00:22:42.575679   61480 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0817 00:22:42.575917   61480 cni.go:93] Creating CNI manager for ""
	I0817 00:22:42.575917   61480 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0817 00:22:42.575917   61480 start_flags.go:277] config:
	{Name:no-preload-20210817002237-111344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:no-preload-20210817002237-111344 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 00:22:42.578612   61480 out.go:177] * Starting control plane node no-preload-20210817002237-111344 in cluster no-preload-20210817002237-111344
	I0817 00:22:42.578839   61480 cache.go:117] Beginning downloading kic base image for docker with docker
	I0817 00:22:42.400725  108300 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0817 00:22:42.400725  108300 addons.go:344] enableAddons completed in 8.1754638s
	I0817 00:22:42.594757  108300 start.go:462] kubectl: 1.20.0, cluster: 1.21.3 (minor skew: 1)
	I0817 00:22:42.596585  108300 out.go:177] * Done! kubectl is now configured to use "cert-options-20210817001948-111344" cluster and "default" namespace by default
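By 00:22:42 the cert-options start had completed end to end (addons enabled, kubeconfig context written), so the cluster was reachable at that point with, for example:

    kubectl --context cert-options-20210817001948-111344 get pods -A

(the context name matches the profile, per the Done! line above), suggesting the TestCertOptions failure lies in the verification steps that follow the start rather than in cluster bring-up.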
	I0817 00:22:42.580322   61480 out.go:177] * Pulling base image ...
	I0817 00:22:42.580670   61480 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime docker
	I0817 00:22:42.580670   61480 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0817 00:22:42.580970   61480 profile.go:148] Saving config to C:\Users\jenkins\minikube-integration\.minikube\profiles\no-preload-20210817002237-111344\config.json ...
	I0817 00:22:42.581229   61480 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\.minikube\profiles\no-preload-20210817002237-111344\config.json: {Name:mk36e584cb9ebd3237a993f0154aee6edcc5c8bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 00:22:42.581530   61480 localpath.go:146] windows sanitize: C:\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-apiserver:v1.22.0-rc.0 -> C:\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-apiserver_v1.22.0-rc.0
	I0817 00:22:42.581530   61480 localpath.go:146] windows sanitize: C:\Users\jenkins\minikube-integration\.minikube\cache\images\docker.io\kubernetesui\metrics-scraper:v1.0.4 -> C:\Users\jenkins\minikube-integration\.minikube\cache\images\docker.io\kubernetesui\metrics-scraper_v1.0.4
	I0817 00:22:42.581530   61480 localpath.go:146] windows sanitize: C:\Users\jenkins\minikube-integration\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins\minikube-integration\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner_v5
	I0817 00:22:42.581530   61480 localpath.go:146] windows sanitize: C:\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\etcd:3.4.13-3 -> C:\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\etcd_3.4.13-3
	I0817 00:22:42.583685   61480 localpath.go:146] windows sanitize: C:\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-proxy:v1.22.0-rc.0 -> C:\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-proxy_v1.22.0-rc.0
	I0817 00:22:42.581530   61480 localpath.go:146] windows sanitize: C:\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-controller-manager:v1.22.0-rc.0 -> C:\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-controller-manager_v1.22.0-rc.0
	I0817 00:22:42.584962   61480 localpath.go:146] windows sanitize: C:\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-scheduler:v1.22.0-rc.0 -> C:\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-scheduler_v1.22.0-rc.0
	I0817 00:22:42.581842   61480 localpath.go:146] windows sanitize: C:\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\coredns\coredns:v1.8.0 -> C:\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\coredns\coredns_v1.8.0
	I0817 00:22:42.581842   61480 localpath.go:146] windows sanitize: C:\Users\jenkins\minikube-integration\.minikube\cache\images\docker.io\kubernetesui\dashboard:v2.1.0 -> C:\Users\jenkins\minikube-integration\.minikube\cache\images\docker.io\kubernetesui\dashboard_v2.1.0
	I0817 00:22:42.585072   61480 localpath.go:146] windows sanitize: C:\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\pause:3.4.1 -> C:\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\pause_3.4.1
	I0817 00:22:42.866623   61480 cache.go:108] acquiring lock: {Name:mkc6064103b19373bab3f3f94face0f66507eed4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 00:22:42.868280   61480 image.go:133] retrieving image: k8s.gcr.io/kube-scheduler:v1.22.0-rc.0
	I0817 00:22:42.868620   61480 cache.go:108] acquiring lock: {Name:mk51d9d3a31d21343bd2f32632b5941ae343eb80 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 00:22:42.869171   61480 image.go:133] retrieving image: k8s.gcr.io/kube-apiserver:v1.22.0-rc.0
	I0817 00:22:42.878533   61480 cache.go:108] acquiring lock: {Name:mk44f47a540d8af84fb7fb72e1af8eb84b99acdc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 00:22:42.878973   61480 cache.go:116] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\coredns\coredns_v1.8.0 exists
	I0817 00:22:42.879358   61480 cache.go:97] cache image "k8s.gcr.io/coredns/coredns:v1.8.0" -> "C:\\Users\\jenkins\\minikube-integration\\.minikube\\cache\\images\\k8s.gcr.io\\coredns\\coredns_v1.8.0" took 293.3169ms
	I0817 00:22:42.879471   61480 cache.go:81] save to tar file k8s.gcr.io/coredns/coredns:v1.8.0 -> C:\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\coredns\coredns_v1.8.0 succeeded
	I0817 00:22:42.887925   61480 image.go:175] daemon lookup for k8s.gcr.io/kube-scheduler:v1.22.0-rc.0: Error response from daemon: reference does not exist
	I0817 00:22:42.889128   61480 cache.go:108] acquiring lock: {Name:mkfe443c64d1a3dae7531e1da24945fa4d1b684d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 00:22:42.889675   61480 cache.go:116] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\minikube-integration\.minikube\cache\images\docker.io\kubernetesui\dashboard_v2.1.0 exists
	I0817 00:22:42.889980   61480 cache.go:97] cache image "docker.io/kubernetesui/dashboard:v2.1.0" -> "C:\\Users\\jenkins\\minikube-integration\\.minikube\\cache\\images\\docker.io\\kubernetesui\\dashboard_v2.1.0" took 303.8737ms
	I0817 00:22:42.890191   61480 cache.go:81] save to tar file docker.io/kubernetesui/dashboard:v2.1.0 -> C:\Users\jenkins\minikube-integration\.minikube\cache\images\docker.io\kubernetesui\dashboard_v2.1.0 succeeded
	I0817 00:22:42.891568   61480 cache.go:108] acquiring lock: {Name:mkcbba06c099fa67c03e9375ab41c3707a41a063 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 00:22:42.891957   61480 cache.go:116] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\minikube-integration\.minikube\cache\images\docker.io\kubernetesui\metrics-scraper_v1.0.4 exists
	I0817 00:22:42.892085   61480 cache.go:97] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.4" -> "C:\\Users\\jenkins\\minikube-integration\\.minikube\\cache\\images\\docker.io\\kubernetesui\\metrics-scraper_v1.0.4" took 310.5434ms
	I0817 00:22:42.892085   61480 cache.go:81] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.4 -> C:\Users\jenkins\minikube-integration\.minikube\cache\images\docker.io\kubernetesui\metrics-scraper_v1.0.4 succeeded
	I0817 00:22:42.895580   61480 cache.go:108] acquiring lock: {Name:mk75b0f99d43d0ec256982a4fcf5a0abccfe6ca3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 00:22:42.895580   61480 cache.go:108] acquiring lock: {Name:mk2aa3f68d9467c1b77d00b1d8ad98c99cc70462 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 00:22:42.895580   61480 cache.go:108] acquiring lock: {Name:mka767400f8ce9f67aae21a5816029bd172b5c95 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 00:22:42.895580   61480 cache.go:108] acquiring lock: {Name:mk5a5a669c940a6fc63188b2b3f844f9faf87771 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 00:22:42.895580   61480 cache.go:116] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\pause_3.4.1 exists
	I0817 00:22:42.895580   61480 cache.go:97] cache image "k8s.gcr.io/pause:3.4.1" -> "C:\\Users\\jenkins\\minikube-integration\\.minikube\\cache\\images\\k8s.gcr.io\\pause_3.4.1" took 305.6327ms
	I0817 00:22:42.895580   61480 cache.go:81] save to tar file k8s.gcr.io/pause:3.4.1 -> C:\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\pause_3.4.1 succeeded
	I0817 00:22:42.895580   61480 image.go:133] retrieving image: k8s.gcr.io/etcd:3.4.13-3
	I0817 00:22:42.895580   61480 image.go:133] retrieving image: k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0
	I0817 00:22:42.895580   61480 image.go:133] retrieving image: k8s.gcr.io/kube-proxy:v1.22.0-rc.0
	I0817 00:22:42.897558   61480 cache.go:108] acquiring lock: {Name:mkbd69c89f5d4341beed10f900f1632dd59716b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 00:22:42.897558   61480 cache.go:116] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\minikube-integration\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I0817 00:22:42.897558   61480 cache.go:97] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins\\minikube-integration\\.minikube\\cache\\images\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 316.0159ms
	I0817 00:22:42.897558   61480 cache.go:81] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins\minikube-integration\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I0817 00:22:42.910789   61480 image.go:175] daemon lookup for k8s.gcr.io/kube-apiserver:v1.22.0-rc.0: Error response from daemon: reference does not exist
	I0817 00:22:42.910789   61480 image.go:175] daemon lookup for k8s.gcr.io/etcd:3.4.13-3: Error response from daemon: reference does not exist
	I0817 00:22:42.925264   61480 image.go:175] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0: Error response from daemon: reference does not exist
	I0817 00:22:42.940244   61480 image.go:175] daemon lookup for k8s.gcr.io/kube-proxy:v1.22.0-rc.0: Error response from daemon: reference does not exist
	W0817 00:22:43.028060   61480 image.go:185] authn lookup for k8s.gcr.io/kube-scheduler:v1.22.0-rc.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0817 00:22:43.158621   61480 image.go:185] authn lookup for k8s.gcr.io/kube-apiserver:v1.22.0-rc.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0817 00:22:43.277126   61480 image.go:185] authn lookup for k8s.gcr.io/etcd:3.4.13-3 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0817 00:22:43.296747   61480 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0817 00:22:43.297017   61480 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0817 00:22:43.297017   61480 cache.go:205] Successfully downloaded all kic artifacts
	I0817 00:22:43.297017   61480 start.go:313] acquiring machines lock for no-preload-20210817002237-111344: {Name:mk48319881a311c6f6007616e0c417a78f12c1ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 00:22:43.297505   61480 start.go:317] acquired machines lock for "no-preload-20210817002237-111344" in 488.4µs
	I0817 00:22:43.297655   61480 start.go:89] Provisioning new machine with config: &{Name:no-preload-20210817002237-111344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:no-preload-20210817002237-111344 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}
	I0817 00:22:43.297941   61480 start.go:126] createHost starting for "" (driver="docker")
	I0817 00:22:43.485647   81396 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-07-30 19:52:33.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2021-08-17 00:22:33.971833000 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	+BindsTo=containerd.service
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0817 00:22:43.485647   81396 machine.go:91] provisioned docker machine in 14.8675147s
	I0817 00:22:43.485647   81396 client.go:171] LocalClient.Create took 34.5091321s
	I0817 00:22:43.485647   81396 start.go:168] duration metric: libmachine.API.Create for "old-k8s-version-20210817002204-111344" took 34.5094331s
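
	The docker.service diff above relies on the standard systemd override pattern: the bare ExecStart= first clears the command inherited from the base unit, since systemd refuses to start a non-oneshot service that has two ExecStart= settings. A minimal Go sketch of rendering such an override with text/template; the template body and field names here are illustrative, not minikube's actual provisioner:

	package main

	import (
	    "os"
	    "text/template"
	)

	// The empty ExecStart= clears the command inherited from the base
	// unit; without it systemd reports "Service has more than one
	// ExecStart= setting, which is only allowed for Type=oneshot services."
	const overrideTmpl = `[Service]
	ExecStart=
	ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock{{range .ExtraArgs}} {{.}}{{end}}
	`

	// unitConfig is a hypothetical stand-in for the provisioner's config.
	type unitConfig struct {
	    ExtraArgs []string
	}

	func main() {
	    t := template.Must(template.New("override").Parse(overrideTmpl))
	    cfg := unitConfig{ExtraArgs: []string{"--default-ulimit=nofile=1048576:1048576"}}
	    // A real provisioner would write this to
	    // /etc/systemd/system/docker.service.d/override.conf, then run
	    // systemctl daemon-reload && systemctl restart docker.
	    if err := t.Execute(os.Stdout, cfg); err != nil {
	        panic(err)
	    }
	}
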
	I0817 00:22:43.485647   81396 start.go:267] post-start starting for "old-k8s-version-20210817002204-111344" (driver="docker")
	I0817 00:22:43.485647   81396 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 00:22:43.500224   81396 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 00:22:43.506855   81396 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210817002204-111344
	I0817 00:22:44.068794   81396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55163 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\old-k8s-version-20210817002204-111344\id_rsa Username:docker}
	I0817 00:22:43.300424   61480 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0817 00:22:43.300950   61480 start.go:160] libmachine.API.Create for "no-preload-20210817002237-111344" (driver="docker")
	I0817 00:22:43.301150   61480 client.go:168] LocalClient.Create starting
	I0817 00:22:43.301735   61480 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem
	I0817 00:22:43.302068   61480 main.go:130] libmachine: Decoding PEM data...
	I0817 00:22:43.302239   61480 main.go:130] libmachine: Parsing certificate...
	I0817 00:22:43.302588   61480 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem
	I0817 00:22:43.302836   61480 main.go:130] libmachine: Decoding PEM data...
	I0817 00:22:43.302836   61480 main.go:130] libmachine: Parsing certificate...
	I0817 00:22:43.311249   61480 cli_runner.go:115] Run: docker network inspect no-preload-20210817002237-111344 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0817 00:22:43.390306   61480 image.go:185] authn lookup for k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0817 00:22:43.519042   61480 image.go:185] authn lookup for k8s.gcr.io/kube-proxy:v1.22.0-rc.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0817 00:22:43.843625   61480 cache.go:162] opening:  \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-scheduler_v1.22.0-rc.0
	W0817 00:22:43.867217   61480 cli_runner.go:162] docker network inspect no-preload-20210817002237-111344 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0817 00:22:43.876707   61480 network_create.go:255] running [docker network inspect no-preload-20210817002237-111344] to gather additional debugging logs...
	I0817 00:22:43.876894   61480 cli_runner.go:115] Run: docker network inspect no-preload-20210817002237-111344
	I0817 00:22:43.948911   61480 cache.go:162] opening:  \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-apiserver_v1.22.0-rc.0
	I0817 00:22:44.045233   61480 cache.go:162] opening:  \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\etcd_3.4.13-3
	I0817 00:22:44.165252   61480 cache.go:162] opening:  \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-controller-manager_v1.22.0-rc.0
	I0817 00:22:44.165252   61480 cache.go:162] opening:  \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-proxy_v1.22.0-rc.0
	W0817 00:22:44.419911   61480 cli_runner.go:162] docker network inspect no-preload-20210817002237-111344 returned with exit code 1
	I0817 00:22:44.419911   61480 network_create.go:258] error running [docker network inspect no-preload-20210817002237-111344]: docker network inspect no-preload-20210817002237-111344: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: no-preload-20210817002237-111344
	I0817 00:22:44.422526   61480 network_create.go:260] output of [docker network inspect no-preload-20210817002237-111344]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: no-preload-20210817002237-111344
	
	** /stderr **
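
	The retry above shows a useful diagnostic pattern: when the templated `docker network inspect ... --format` call exits non-zero, the bare command is re-run purely to capture raw stdout/stderr for the log. A small sketch of that run-then-rerun idea, with hypothetical helper names:

	package main

	import (
	    "bytes"
	    "fmt"
	    "os/exec"
	)

	// inspectNetwork tries a formatted inspect first; on failure it
	// re-runs the bare command so the raw stdout/stderr land in the
	// logs, mirroring the "gather additional debugging logs" step above.
	func inspectNetwork(name string) (string, error) {
	    out, err := exec.Command("docker", "network", "inspect", name,
	        "--format", "{{.Name}}").Output()
	    if err == nil {
	        return string(bytes.TrimSpace(out)), nil
	    }
	    // Formatted call failed: rerun without --format for diagnostics.
	    var stdout, stderr bytes.Buffer
	    debug := exec.Command("docker", "network", "inspect", name)
	    debug.Stdout, debug.Stderr = &stdout, &stderr
	    _ = debug.Run() // best effort; only the output matters here
	    return "", fmt.Errorf("inspect %s failed: %v\nstdout:\n%s\nstderr:\n%s",
	        name, err, stdout.String(), stderr.String())
	}

	func main() {
	    if _, err := inspectNetwork("no-such-network"); err != nil {
	        fmt.Println(err)
	    }
	}
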
	I0817 00:22:44.459347   61480 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0817 00:22:45.005571   61480 cache.go:157] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-scheduler_v1.22.0-rc.0 exists
	I0817 00:22:45.006583   61480 cache.go:97] cache image "k8s.gcr.io/kube-scheduler:v1.22.0-rc.0" -> "C:\\Users\\jenkins\\minikube-integration\\.minikube\\cache\\images\\k8s.gcr.io\\kube-scheduler_v1.22.0-rc.0" took 2.4211999s
	I0817 00:22:45.006583   61480 cache.go:81] save to tar file k8s.gcr.io/kube-scheduler:v1.22.0-rc.0 -> C:\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-scheduler_v1.22.0-rc.0 succeeded
	I0817 00:22:45.076038   61480 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000ecc030] misses:0}
	I0817 00:22:45.076038   61480 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0817 00:22:45.076038   61480 network_create.go:106] attempt to create docker network no-preload-20210817002237-111344 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0817 00:22:45.081040   61480 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20210817002237-111344
	I0817 00:22:45.196707   61480 cache.go:157] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-apiserver_v1.22.0-rc.0 exists
	I0817 00:22:45.196707   61480 cache.go:97] cache image "k8s.gcr.io/kube-apiserver:v1.22.0-rc.0" -> "C:\\Users\\jenkins\\minikube-integration\\.minikube\\cache\\images\\k8s.gcr.io\\kube-apiserver_v1.22.0-rc.0" took 2.6150781s
	I0817 00:22:45.197019   61480 cache.go:81] save to tar file k8s.gcr.io/kube-apiserver:v1.22.0-rc.0 -> C:\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-apiserver_v1.22.0-rc.0 succeeded
	I0817 00:22:45.357737   61480 cache.go:157] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-proxy_v1.22.0-rc.0 exists
	I0817 00:22:45.358196   61480 cache.go:97] cache image "k8s.gcr.io/kube-proxy:v1.22.0-rc.0" -> "C:\\Users\\jenkins\\minikube-integration\\.minikube\\cache\\images\\k8s.gcr.io\\kube-proxy_v1.22.0-rc.0" took 2.7737939s
	I0817 00:22:45.358196   61480 cache.go:81] save to tar file k8s.gcr.io/kube-proxy:v1.22.0-rc.0 -> C:\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-proxy_v1.22.0-rc.0 succeeded
	I0817 00:22:45.474192   61480 cache.go:157] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-controller-manager_v1.22.0-rc.0 exists
	I0817 00:22:45.475296   61480 cache.go:97] cache image "k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0" -> "C:\\Users\\jenkins\\minikube-integration\\.minikube\\cache\\images\\k8s.gcr.io\\kube-controller-manager_v1.22.0-rc.0" took 2.8899342s
	I0817 00:22:45.475296   61480 cache.go:81] save to tar file k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0 -> C:\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-controller-manager_v1.22.0-rc.0 succeeded
	W0817 00:22:45.717238   61480 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20210817002237-111344 returned with exit code 1
	W0817 00:22:45.717238   61480 network_create.go:98] failed to create docker network no-preload-20210817002237-111344 192.168.49.0/24, will retry: subnet is taken
	I0817 00:22:45.729887   61480 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000ecc030] amended:false}} dirty:map[] misses:0}
	I0817 00:22:45.729887   61480 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0817 00:22:45.740045   61480 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000ecc030] amended:true}} dirty:map[192.168.49.0:0xc000ecc030 192.168.58.0:0xc0006421b8] misses:0}
	I0817 00:22:45.741020   61480 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0817 00:22:45.741020   61480 network_create.go:106] attempt to create docker network no-preload-20210817002237-111344 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0817 00:22:45.745992   61480 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20210817002237-111344
	W0817 00:22:46.315017   61480 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20210817002237-111344 returned with exit code 1
	W0817 00:22:46.315017   61480 network_create.go:98] failed to create docker network no-preload-20210817002237-111344 192.168.58.0/24, will retry: subnet is taken
	I0817 00:22:46.326144   61480 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000ecc030] amended:true}} dirty:map[192.168.49.0:0xc000ecc030 192.168.58.0:0xc0006421b8] misses:1}
	I0817 00:22:46.326315   61480 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0817 00:22:46.336687   61480 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000ecc030] amended:true}} dirty:map[192.168.49.0:0xc000ecc030 192.168.58.0:0xc0006421b8 192.168.67.0:0xc0003320a8] misses:1}
	I0817 00:22:46.337255   61480 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0817 00:22:46.337789   61480 network_create.go:106] attempt to create docker network no-preload-20210817002237-111344 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0817 00:22:46.351248   61480 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20210817002237-111344
	I0817 00:22:47.048124   61480 cache.go:157] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\etcd_3.4.13-3 exists
	I0817 00:22:47.048602   61480 cache.go:97] cache image "k8s.gcr.io/etcd:3.4.13-3" -> "C:\\Users\\jenkins\\minikube-integration\\.minikube\\cache\\images\\k8s.gcr.io\\etcd_3.4.13-3" took 4.466591s
	I0817 00:22:47.048677   61480 cache.go:81] save to tar file k8s.gcr.io/etcd:3.4.13-3 -> C:\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\etcd_3.4.13-3 succeeded
	I0817 00:22:47.048677   61480 cache.go:88] Successfully saved all images to host disk.
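
	The cache.go lines interleaved above follow a check-then-fill scheme: each image maps to a tar file under .minikube\cache\images, and the download is skipped when that file already exists. A hedged sketch of the idea; the paths and the save step are stand-ins, not minikube's implementation:

	package main

	import (
	    "fmt"
	    "os"
	    "path/filepath"
	    "strings"
	)

	// cachePath maps "k8s.gcr.io/etcd:3.4.13-3" to a tar file name the
	// way the log suggests: registry path preserved, ":" swapped for "_".
	func cachePath(cacheDir, image string) string {
	    return filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))
	}

	// ensureCached skips images whose tar already exists ("cache image
	// ... exists" in the log) and otherwise saves them (stubbed here).
	func ensureCached(cacheDir, image string) error {
	    p := cachePath(cacheDir, image)
	    if _, err := os.Stat(p); err == nil {
	        fmt.Printf("%s exists, skipping\n", p)
	        return nil
	    }
	    if err := os.MkdirAll(filepath.Dir(p), 0o755); err != nil {
	        return err
	    }
	    // Stand-in for the real pull-and-save-to-tar step.
	    return os.WriteFile(p, []byte("tar contents"), 0o644)
	}

	func main() {
	    for _, img := range []string{"k8s.gcr.io/etcd:3.4.13-3", "k8s.gcr.io/kube-proxy:v1.22.0-rc.0"} {
	        if err := ensureCached("cache/images", img); err != nil {
	            fmt.Println("cache:", err)
	        }
	    }
	}
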
	I0817 00:22:47.253605   61480 network_create.go:90] docker network no-preload-20210817002237-111344 192.168.67.0/24 created
	I0817 00:22:47.253701   61480 kic.go:106] calculated static IP "192.168.67.2" for the "no-preload-20210817002237-111344" container
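
	The subnet walk above (192.168.49.0/24 taken, then 58.0/24 taken, then 67.0/24 created) steps the third octet by 9 and retries `docker network create` while each candidate is in use. A compact sketch of that loop; the step size and the overlap check are assumptions read off this log:

	package main

	import (
	    "fmt"
	    "os/exec"
	    "strings"
	)

	// createNetwork walks 192.168.49.0/24, 192.168.58.0/24,
	// 192.168.67.0/24, ... until docker network create succeeds or the
	// candidates run out.
	func createNetwork(name string) (string, error) {
	    for octet := 49; octet <= 211; octet += 9 {
	        subnet := fmt.Sprintf("192.168.%d.0/24", octet)
	        gateway := fmt.Sprintf("192.168.%d.1", octet)
	        out, err := exec.Command("docker", "network", "create",
	            "--driver=bridge", "--subnet="+subnet, "--gateway="+gateway, name).CombinedOutput()
	        if err == nil {
	            return subnet, nil
	        }
	        if strings.Contains(string(out), "overlaps") {
	            continue // subnet is taken, try the next candidate
	        }
	        return "", fmt.Errorf("network create failed: %v: %s", err, out)
	    }
	    return "", fmt.Errorf("no free subnet found for %s", name)
	}

	func main() {
	    subnet, err := createNetwork("example-net")
	    if err != nil {
	        fmt.Println(err)
	        return
	    }
	    fmt.Println("created on", subnet)
	}
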
	I0817 00:22:47.266018   61480 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0817 00:22:47.845935   61480 cli_runner.go:115] Run: docker volume create no-preload-20210817002237-111344 --label name.minikube.sigs.k8s.io=no-preload-20210817002237-111344 --label created_by.minikube.sigs.k8s.io=true
	I0817 00:22:44.250372   81396 ssh_runner.go:149] Run: cat /etc/os-release
	I0817 00:22:44.280236   81396 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0817 00:22:44.280236   81396 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0817 00:22:44.280236   81396 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0817 00:22:44.280236   81396 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0817 00:22:44.280236   81396 filesync.go:126] Scanning C:\Users\jenkins\minikube-integration\.minikube\addons for local assets ...
	I0817 00:22:44.280236   81396 filesync.go:126] Scanning C:\Users\jenkins\minikube-integration\.minikube\files for local assets ...
	I0817 00:22:44.280236   81396 filesync.go:149] local asset: C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\1113442.pem -> 1113442.pem in /etc/ssl/certs
	I0817 00:22:44.296432   81396 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0817 00:22:44.344934   81396 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\1113442.pem --> /etc/ssl/certs/1113442.pem (1708 bytes)
	I0817 00:22:44.465687   81396 start.go:270] post-start completed in 980.003ms
	I0817 00:22:44.474646   81396 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20210817002204-111344
	I0817 00:22:45.107058   81396 profile.go:148] Saving config to C:\Users\jenkins\minikube-integration\.minikube\profiles\old-k8s-version-20210817002204-111344\config.json ...
	I0817 00:22:45.121042   81396 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0817 00:22:45.130040   81396 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210817002204-111344
	I0817 00:22:45.761978   81396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55163 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\old-k8s-version-20210817002204-111344\id_rsa Username:docker}
	I0817 00:22:45.960402   81396 start.go:129] duration metric: createHost completed in 36.9872231s
	I0817 00:22:45.960402   81396 start.go:80] releasing machines lock for "old-k8s-version-20210817002204-111344", held for 36.9876977s
	I0817 00:22:45.966364   81396 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20210817002204-111344
	I0817 00:22:46.529200   81396 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0817 00:22:46.536352   81396 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210817002204-111344
	I0817 00:22:46.536631   81396 ssh_runner.go:149] Run: systemctl --version
	I0817 00:22:46.542751   81396 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210817002204-111344
	I0817 00:22:47.080794   81396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55163 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\old-k8s-version-20210817002204-111344\id_rsa Username:docker}
	I0817 00:22:47.135618   81396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55163 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\old-k8s-version-20210817002204-111344\id_rsa Username:docker}
	I0817 00:22:47.321875   81396 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0817 00:22:47.492766   81396 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0817 00:22:47.550491   81396 cruntime.go:249] skipping containerd shutdown because we are bound to it
	I0817 00:22:47.557736   81396 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0817 00:22:47.608250   81396 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 00:22:47.686170   81396 ssh_runner.go:149] Run: sudo systemctl unmask docker.service
	I0817 00:22:48.041795   81396 ssh_runner.go:149] Run: sudo systemctl enable docker.socket
	I0817 00:22:48.413914   81396 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0817 00:22:48.520410   81396 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0817 00:22:48.855252   81396 ssh_runner.go:149] Run: sudo systemctl start docker
	I0817 00:22:48.924608   81396 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0817 00:22:49.240064   81396 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0817 00:22:48.411928   61480 oci.go:102] Successfully created a docker volume no-preload-20210817002237-111344
	I0817 00:22:48.417921   61480 cli_runner.go:115] Run: docker run --rm --name no-preload-20210817002237-111344-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-20210817002237-111344 --entrypoint /usr/bin/test -v no-preload-20210817002237-111344:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib
	I0817 00:22:50.967666   61480 cli_runner.go:168] Completed: docker run --rm --name no-preload-20210817002237-111344-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-20210817002237-111344 --entrypoint /usr/bin/test -v no-preload-20210817002237-111344:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib: (2.5496483s)
	I0817 00:22:50.967768   61480 oci.go:106] Successfully prepared a docker volume no-preload-20210817002237-111344
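
	The sidecar run above doubles as volume preparation and verification: mounting the fresh named volume at /var makes Docker populate it from the image's /var layer, and the `/usr/bin/test -d /var/lib` entrypoint exits 0 only if that populated directory exists. A sketch of the same probe; the image and volume names are placeholders:

	package main

	import (
	    "fmt"
	    "os/exec"
	)

	// probeVolume runs a disposable container whose entrypoint is
	// /usr/bin/test with argument "-d /var/lib": it exits 0 only when
	// the mounted volume contains that directory, confirming the volume
	// was populated from the image.
	func probeVolume(volume, image string) error {
	    cmd := exec.Command("docker", "run", "--rm",
	        "--entrypoint", "/usr/bin/test",
	        "-v", volume+":/var", image, "-d", "/var/lib")
	    if out, err := cmd.CombinedOutput(); err != nil {
	        return fmt.Errorf("volume probe failed: %v: %s", err, out)
	    }
	    return nil
	}

	func main() {
	    // Image name is illustrative; the log uses a pinned kicbase digest.
	    if err := probeVolume("example-vol", "ubuntu:20.04"); err != nil {
	        fmt.Println(err)
	    }
	}
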
	I0817 00:22:50.967768   61480 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime docker
	I0817 00:22:50.973793   61480 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 00:22:51.855641   61480 info.go:263] docker info: {ID:4XCY:3GZD:KK67:IPM7:RRQF:WWZF:OGQ6:X6HQ:572M:7N57:P63G:EAE5 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:69 OomKillDisable:true NGoroutines:60 SystemTime:2021-08-17 00:22:51.4653298 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0817 00:22:51.861942   61480 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0817 00:22:52.681120   61480 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-20210817002237-111344 --name no-preload-20210817002237-111344 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-20210817002237-111344 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-20210817002237-111344 --network no-preload-20210817002237-111344 --ip 192.168.67.2 --volume no-preload-20210817002237-111344:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2021-08-17 00:20:06 UTC, end at Tue 2021-08-17 00:23:00 UTC. --
	Aug 17 00:21:28 cert-options-20210817001948-111344 dockerd[466]: time="2021-08-17T00:21:28.125061200Z" level=info msg="Processing signal 'terminated'"
	Aug 17 00:21:28 cert-options-20210817001948-111344 dockerd[466]: time="2021-08-17T00:21:28.128775900Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 17 00:21:28 cert-options-20210817001948-111344 dockerd[466]: time="2021-08-17T00:21:28.131154200Z" level=info msg="Daemon shutdown complete"
	Aug 17 00:21:28 cert-options-20210817001948-111344 systemd[1]: docker.service: Succeeded.
	Aug 17 00:21:28 cert-options-20210817001948-111344 systemd[1]: Stopped Docker Application Container Engine.
	Aug 17 00:21:28 cert-options-20210817001948-111344 systemd[1]: Starting Docker Application Container Engine...
	Aug 17 00:21:28 cert-options-20210817001948-111344 dockerd[777]: time="2021-08-17T00:21:28.353037400Z" level=info msg="Starting up"
	Aug 17 00:21:28 cert-options-20210817001948-111344 dockerd[777]: time="2021-08-17T00:21:28.360989700Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Aug 17 00:21:28 cert-options-20210817001948-111344 dockerd[777]: time="2021-08-17T00:21:28.361029200Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Aug 17 00:21:28 cert-options-20210817001948-111344 dockerd[777]: time="2021-08-17T00:21:28.361094300Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Aug 17 00:21:28 cert-options-20210817001948-111344 dockerd[777]: time="2021-08-17T00:21:28.361120000Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Aug 17 00:21:28 cert-options-20210817001948-111344 dockerd[777]: time="2021-08-17T00:21:28.369358600Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Aug 17 00:21:28 cert-options-20210817001948-111344 dockerd[777]: time="2021-08-17T00:21:28.369896500Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Aug 17 00:21:28 cert-options-20210817001948-111344 dockerd[777]: time="2021-08-17T00:21:28.370052500Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Aug 17 00:21:28 cert-options-20210817001948-111344 dockerd[777]: time="2021-08-17T00:21:28.370197600Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Aug 17 00:21:28 cert-options-20210817001948-111344 dockerd[777]: time="2021-08-17T00:21:28.424116200Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Aug 17 00:21:28 cert-options-20210817001948-111344 dockerd[777]: time="2021-08-17T00:21:28.451377800Z" level=info msg="Loading containers: start."
	Aug 17 00:21:28 cert-options-20210817001948-111344 dockerd[777]: time="2021-08-17T00:21:28.812186000Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 17 00:21:28 cert-options-20210817001948-111344 dockerd[777]: time="2021-08-17T00:21:28.980838200Z" level=info msg="Loading containers: done."
	Aug 17 00:21:29 cert-options-20210817001948-111344 dockerd[777]: time="2021-08-17T00:21:29.131194100Z" level=info msg="Docker daemon" commit=75249d8 graphdriver(s)=overlay2 version=20.10.8
	Aug 17 00:21:29 cert-options-20210817001948-111344 dockerd[777]: time="2021-08-17T00:21:29.131331900Z" level=info msg="Daemon has completed initialization"
	Aug 17 00:21:29 cert-options-20210817001948-111344 systemd[1]: Started Docker Application Container Engine.
	Aug 17 00:21:29 cert-options-20210817001948-111344 dockerd[777]: time="2021-08-17T00:21:29.209554700Z" level=info msg="API listen on [::]:2376"
	Aug 17 00:21:29 cert-options-20210817001948-111344 dockerd[777]: time="2021-08-17T00:21:29.237517000Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 17 00:22:20 cert-options-20210817001948-111344 dockerd[777]: time="2021-08-17T00:22:20.080821800Z" level=info msg="ignoring event" container=4929f4a13cb9418b024c3e86a2f0a8867b21b294f8cd1918cf0452144c1113f2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	0010c0c9cd089       6e38f40d628db       12 seconds ago       Running             storage-provisioner       0                   ef52f453fb304
	ce2411a384ec7       296a6d5035e2d       12 seconds ago       Running             coredns                   0                   7741798aedb19
	c7ac30fbe5950       adb2816ea823a       14 seconds ago       Running             kube-proxy                0                   baaee86ea660f
	84f5856f4a034       bc2bb319a7038       37 seconds ago       Running             kube-controller-manager   1                   ead59519e7355
	9eee2ea01d3da       6be0dc1302e30       About a minute ago   Running             kube-scheduler            0                   ab81d366ff951
	c99c5f39646a5       3d174f00aa39e       About a minute ago   Running             kube-apiserver            0                   124e5ac9dd22e
	4929f4a13cb94       bc2bb319a7038       About a minute ago   Exited              kube-controller-manager   0                   ead59519e7355
	4b14610200a3c       0369cf4303ffd       About a minute ago   Running             etcd                      0                   bf3e2072c2fe8
	
	* 
	* ==> coredns [ce2411a384ec] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	* 
	* ==> describe nodes <==
	* Name:               cert-options-20210817001948-111344
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=cert-options-20210817001948-111344
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48
	                    minikube.k8s.io/name=cert-options-20210817001948-111344
	                    minikube.k8s.io/updated_at=2021_08_17T00_22_29_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Aug 2021 00:22:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  cert-options-20210817001948-111344
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Aug 2021 00:22:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Aug 2021 00:22:44 +0000   Tue, 17 Aug 2021 00:22:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Aug 2021 00:22:44 +0000   Tue, 17 Aug 2021 00:22:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Aug 2021 00:22:44 +0000   Tue, 17 Aug 2021 00:22:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Aug 2021 00:22:44 +0000   Tue, 17 Aug 2021 00:22:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    cert-options-20210817001948-111344
	Capacity:
	  cpu:                4
	  ephemeral-storage:  65792556Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             20481980Ki
	  pods:               110
	Allocatable:
	  cpu:                4
	  ephemeral-storage:  65792556Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             20481980Ki
	  pods:               110
	System Info:
	  Machine ID:                 dfc5def84a78402c9caa00a7cad25a86
	  System UUID:                60ee471b-0da0-42bc-9b89-904e3c3e802f
	  Boot ID:                    59d49a8b-044c-440e-a1d3-94e728b56235
	  Kernel Version:             4.19.121-linuxkit
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.8
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-558bd4d5db-nrwk5                                      100m (2%)     0 (0%)      70Mi (0%)        170Mi (0%)     16s
	  kube-system                 etcd-cert-options-20210817001948-111344                       100m (2%)     0 (0%)      100Mi (0%)       0 (0%)         27s
	  kube-system                 kube-apiserver-cert-options-20210817001948-111344             250m (6%)     0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-controller-manager-cert-options-20210817001948-111344    200m (5%)     0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-proxy-krqwh                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         17s
	  kube-system                 kube-scheduler-cert-options-20210817001948-111344             100m (2%)     0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (18%)  0 (0%)
	  memory             170Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  NodeHasSufficientMemory  72s (x8 over 73s)  kubelet     Node cert-options-20210817001948-111344 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    72s (x8 over 73s)  kubelet     Node cert-options-20210817001948-111344 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     72s (x7 over 73s)  kubelet     Node cert-options-20210817001948-111344 status is now: NodeHasSufficientPID
	  Normal  Starting                 29s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  28s                kubelet     Node cert-options-20210817001948-111344 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28s                kubelet     Node cert-options-20210817001948-111344 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28s                kubelet     Node cert-options-20210817001948-111344 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             28s                kubelet     Node cert-options-20210817001948-111344 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  27s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                17s                kubelet     Node cert-options-20210817001948-111344 status is now: NodeReady
	  Normal  Starting                 12s                kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.000044]  hv_stimer0_isr+0x20/0x2d
	[  +0.000053]  hv_stimer0_vector_handler+0x3b/0x57
	[  +0.000021]  hv_stimer0_callback_vector+0xf/0x20
	[  +0.000002]  </IRQ>
	[  +0.000002] RIP: 0010:native_safe_halt+0x7/0x8
	[  +0.000002] Code: 60 02 df f0 83 44 24 fc 00 48 8b 00 a8 08 74 0b 65 81 25 dd ce 6f 6e ff ff ff 7f c3 e8 ce e6 72 ff f4 c3 e8 c7 e6 72 ff fb f4 <c3> 0f 1f 44 00 00 53 e8 69 0e 82 ff 65 8b 35 83 64 6f 6e 31 ff e8
	[  +0.000001] RSP: 0018:ffffb51d800a3ec8 EFLAGS: 00000246 ORIG_RAX: ffffffffffffff12
	[  +0.000002] RAX: ffffffff91918b30 RBX: 0000000000000001 RCX: ffffffff92253150
	[  +0.000001] RDX: 0000000000171622 RSI: 0000000000000001 RDI: 0000000000000001
	[  +0.000001] RBP: 0000000000000000 R08: 0000007cfc1104b2 R09: 0000000000000002
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: ffff8d162e19ef80 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000002]  ? __sched_text_end+0x1/0x1
	[  +0.000021]  ? native_safe_halt+0x5/0x8
	[  +0.000002]  default_idle+0x1b/0x2c
	[  +0.000003]  do_idle+0xe5/0x216
	[  +0.000003]  cpu_startup_entry+0x6f/0x71
	[  +0.000019]  start_secondary+0x18e/0x1a9
	[  +0.000032]  secondary_startup_64+0xa4/0xb0
	[  +0.000020] ---[ end trace b7d34331c4afdfb9 ]---
	[Aug17 00:14] tee (131347): /proc/127190/oom_adj is deprecated, please use /proc/127190/oom_score_adj instead.
	[Aug17 00:18] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000007] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.100196] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000006] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	
	* 
	* ==> etcd [4b14610200a3] <==
	* 2021-08-17 00:22:18.293978 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-node-lease\" " with result "range_response_count:0 size:4" took too long (103.6711ms) to execute
	2021-08-17 00:22:18.295627 W | etcdserver: read-only range request "key:\"/registry/clusterroles/edit\" " with result "range_response_count:0 size:4" took too long (160.3186ms) to execute
	2021-08-17 00:22:18.400373 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:4" took too long (102.8223ms) to execute
	2021-08-17 00:22:18.449948 W | etcdserver: read-only range request "key:\"/registry/clusterroles/system:aggregate-to-view\" " with result "range_response_count:0 size:4" took too long (137.0483ms) to execute
	2021-08-17 00:22:22.255765 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 00:22:38.040860 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 00:22:38.350277 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/etcd-cert-options-20210817001948-111344\" " with result "range_response_count:1 size:5272" took too long (301.8071ms) to execute
	2021-08-17 00:22:39.474340 W | wal: sync duration of 1.1003155s, expected less than 1s
	2021-08-17 00:22:39.474619 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/etcd-cert-options-20210817001948-111344\" " with result "range_response_count:1 size:5272" took too long (1.0872116s) to execute
	2021-08-17 00:22:40.139690 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" " with result "range_response_count:0 size:5" took too long (391.4447ms) to execute
	2021-08-17 00:22:40.140389 W | etcdserver: read-only range request "key:\"/registry/storageclasses/standard\" " with result "range_response_count:0 size:5" took too long (501.1851ms) to execute
	2021-08-17 00:22:40.141208 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-scheduler-cert-options-20210817001948-111344\" " with result "range_response_count:1 size:4311" took too long (637.7386ms) to execute
	2021-08-17 00:22:41.881289 W | etcdserver: request "header:<ID:3238505196965872518 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/storageclasses/standard\" mod_revision:0 > success:<request_put:<key:\"/registry/storageclasses/standard\" value_size:936 >> failure:<>>" with result "size:16" took too long (766.2684ms) to execute
	2021-08-17 00:22:41.882112 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-controller-manager-cert-options-20210817001948-111344\" " with result "range_response_count:1 size:5734" took too long (1.7081082s) to execute
	2021-08-17 00:22:42.047474 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 00:22:42.102539 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" " with result "range_response_count:1 size:721" took too long (190.1346ms) to execute
	2021-08-17 00:22:42.136088 W | etcdserver: read-only range request "key:\"/registry/clusterrolebindings/storage-provisioner\" " with result "range_response_count:0 size:5" took too long (222.6182ms) to execute
	2021-08-17 00:22:44.189744 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/daemon-set-controller\" " with result "range_response_count:1 size:260" took too long (109.0938ms) to execute
	2021-08-17 00:22:44.947294 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:1 size:173" took too long (150.8752ms) to execute
	2021-08-17 00:22:44.951624 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/clusterrole-aggregation-controller\" " with result "range_response_count:1 size:299" took too long (155.2341ms) to execute
	2021-08-17 00:22:44.951884 W | etcdserver: read-only range request "key:\"/registry/leases/kube-node-lease/cert-options-20210817001948-111344\" " with result "range_response_count:1 size:699" took too long (156.3494ms) to execute
	2021-08-17 00:22:44.982846 W | etcdserver: request "header:<ID:3238505196965872703 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/kube-dns\" mod_revision:0 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/kube-dns\" value_size:539 >> failure:<>>" with result "size:16" took too long (104.2481ms) to execute
	2021-08-17 00:22:45.000116 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:729" took too long (101.4921ms) to execute
	2021-08-17 00:22:51.129440 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-17 00:23:01.155283 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  00:23:01 up  1:18,  0 users,  load average: 22.60, 18.80, 10.93
	Linux cert-options-20210817001948-111344 4.19.121-linuxkit #1 SMP Tue Dec 1 17:50:32 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [c99c5f39646a] <==
	* I0817 00:22:28.591703       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0817 00:22:28.706485       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0817 00:22:28.894969       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0817 00:22:38.547831       1 client.go:360] parsed scheme: "passthrough"
	I0817 00:22:38.548130       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0817 00:22:38.548181       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0817 00:22:39.476201       1 trace.go:205] Trace[321783501]: "Get" url:/api/v1/namespaces/kube-system/pods/etcd-cert-options-20210817001948-111344,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.58.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (17-Aug-2021 00:22:38.383) (total time: 1092ms):
	Trace[321783501]: ---"About to write a response" 1092ms (00:22:00.475)
	Trace[321783501]: [1.0927933s] [1.0927933s] END
	I0817 00:22:40.142756       1 trace.go:205] Trace[1998019274]: "Get" url:/apis/storage.k8s.io/v1/storageclasses/standard,user-agent:kubectl/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/json,protocol:HTTP/2.0 (17-Aug-2021 00:22:39.637) (total time: 505ms):
	Trace[1998019274]: [505.3606ms] [505.3606ms] END
	I0817 00:22:40.143981       1 trace.go:205] Trace[1742778518]: "Get" url:/api/v1/namespaces/kube-system/pods/kube-scheduler-cert-options-20210817001948-111344,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.58.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (17-Aug-2021 00:22:39.502) (total time: 641ms):
	Trace[1742778518]: ---"About to write a response" 640ms (00:22:00.143)
	Trace[1742778518]: [641.4513ms] [641.4513ms] END
	I0817 00:22:41.885289       1 trace.go:205] Trace[1618303523]: "Create" url:/api/v1/namespaces/kube-system/serviceaccounts,user-agent:kubectl/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/json,protocol:HTTP/2.0 (17-Aug-2021 00:22:40.160) (total time: 1724ms):
	Trace[1618303523]: ---"Object stored in database" 1723ms (00:22:00.884)
	Trace[1618303523]: [1.7243066s] [1.7243066s] END
	I0817 00:22:41.885543       1 trace.go:205] Trace[870768292]: "Create" url:/apis/storage.k8s.io/v1/storageclasses,user-agent:kubectl/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/json,protocol:HTTP/2.0 (17-Aug-2021 00:22:40.148) (total time: 1736ms):
	Trace[870768292]: ---"Object stored in database" 1734ms (00:22:00.884)
	Trace[870768292]: [1.7369791s] [1.7369791s] END
	I0817 00:22:41.886558       1 trace.go:205] Trace[536190284]: "Get" url:/api/v1/namespaces/kube-system/pods/kube-controller-manager-cert-options-20210817001948-111344,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.58.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (17-Aug-2021 00:22:40.172) (total time: 1714ms):
	Trace[536190284]: ---"About to write a response" 1712ms (00:22:00.885)
	Trace[536190284]: [1.7140775s] [1.7140775s] END
	I0817 00:22:44.568680       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0817 00:22:45.155255       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-controller-manager [4929f4a13cb9] <==
	* 	/usr/local/go/src/io/io.go:328 +0x87
	io.ReadFull(...)
		/usr/local/go/src/io/io.go:347
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.readFrameHeader(0xc000f783b8, 0x9, 0x9, 0x5007a00, 0xc0006842a0, 0x0, 0x0, 0x0, 0x0)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:237 +0x89
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc000f78380, 0xc0004e80f0, 0x0, 0x0, 0x0)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:492 +0xa5
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*clientConnReadLoop).run(0xc000f32fa8, 0x0, 0x0)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:1819 +0xd8
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).readLoop(0xc000f2cf00)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:1741 +0x6f
	created by k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).newClientConn
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:705 +0x6c5
	
	goroutine 166 [runnable]:
	net/http.setRequestCancel.func4(0x0, 0xc000f27ad0, 0xc000f18410, 0xc000f1e51c, 0xc000f1cc00)
		/usr/local/go/src/net/http/client.go:397 +0x96
	created by net/http.setRequestCancel
		/usr/local/go/src/net/http/client.go:396 +0x337
	
	goroutine 30 [runnable]:
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*clientStream).awaitRequestCancel(0xc00039ef20, 0xc000e90e00)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:343
	created by k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*clientConnReadLoop).handleResponse
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:2054 +0x728
	
	* 
	* ==> kube-controller-manager [84f5856f4a03] <==
	* I0817 00:22:44.049234       1 shared_informer.go:247] Caches are synced for PVC protection 
	I0817 00:22:44.094879       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0817 00:22:44.100283       1 shared_informer.go:247] Caches are synced for TTL after finished 
	I0817 00:22:44.100513       1 shared_informer.go:247] Caches are synced for PV protection 
	I0817 00:22:44.049247       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	I0817 00:22:44.141268       1 shared_informer.go:247] Caches are synced for attach detach 
	I0817 00:22:44.166336       1 range_allocator.go:373] Set node cert-options-20210817001948-111344 PodCIDR to [10.244.0.0/24]
	I0817 00:22:44.168084       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0817 00:22:44.250867       1 event.go:291] "Event occurred" object="kube-system/kube-scheduler-cert-options-20210817001948-111344" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0817 00:22:44.259748       1 event.go:291] "Event occurred" object="kube-system/etcd-cert-options-20210817001948-111344" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0817 00:22:44.265763       1 event.go:291] "Event occurred" object="kube-system/kube-controller-manager-cert-options-20210817001948-111344" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0817 00:22:44.265806       1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-cert-options-20210817001948-111344" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0817 00:22:44.282256       1 shared_informer.go:247] Caches are synced for deployment 
	I0817 00:22:44.343452       1 shared_informer.go:247] Caches are synced for resource quota 
	I0817 00:22:44.347932       1 shared_informer.go:247] Caches are synced for resource quota 
	I0817 00:22:44.371398       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0817 00:22:44.377129       1 shared_informer.go:247] Caches are synced for disruption 
	I0817 00:22:44.377156       1 disruption.go:371] Sending events to api server.
	I0817 00:22:44.732282       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0817 00:22:44.732337       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0817 00:22:44.755905       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-krqwh"
	I0817 00:22:44.771694       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0817 00:22:45.214518       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-558bd4d5db to 1"
	I0817 00:22:45.345577       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-nrwk5"
	I0817 00:22:49.072645       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	
	* 
	* ==> kube-proxy [c7ac30fbe595] <==
	* I0817 00:22:48.775544       1 node.go:172] Successfully retrieved node IP: 192.168.58.2
	I0817 00:22:48.776572       1 server_others.go:140] Detected node IP 192.168.58.2
	W0817 00:22:48.778416       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0817 00:22:49.025111       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0817 00:22:49.025184       1 server_others.go:212] Using iptables Proxier.
	I0817 00:22:49.025202       1 server_others.go:219] creating dualStackProxier for iptables.
	W0817 00:22:49.025233       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0817 00:22:49.025740       1 server.go:643] Version: v1.21.3
	I0817 00:22:49.029390       1 config.go:224] Starting endpoint slice config controller
	I0817 00:22:49.029445       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0817 00:22:49.029774       1 config.go:315] Starting service config controller
	I0817 00:22:49.029789       1 shared_informer.go:240] Waiting for caches to sync for service config
	W0817 00:22:49.106590       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0817 00:22:49.131343       1 shared_informer.go:247] Caches are synced for service config 
	I0817 00:22:49.131478       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	W0817 00:22:49.144172       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	
	* 
	* ==> kube-scheduler [9eee2ea01d3d] <==
	* E0817 00:22:18.596487       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0817 00:22:18.669498       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0817 00:22:18.676294       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0817 00:22:18.687371       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0817 00:22:18.728636       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0817 00:22:18.825971       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0817 00:22:18.883828       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0817 00:22:18.907761       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0817 00:22:19.071537       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0817 00:22:19.114375       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0817 00:22:19.152459       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0817 00:22:19.156396       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0817 00:22:20.247877       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0817 00:22:20.793587       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0817 00:22:20.850722       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0817 00:22:20.923998       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0817 00:22:20.991820       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0817 00:22:21.113171       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0817 00:22:21.284219       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0817 00:22:21.289395       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0817 00:22:21.563547       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0817 00:22:21.622995       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0817 00:22:21.687098       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0817 00:22:21.872974       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0817 00:22:26.112706       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2021-08-17 00:20:06 UTC, end at Tue 2021-08-17 00:23:02 UTC. --
	Aug 17 00:22:34 cert-options-20210817001948-111344 kubelet[2868]: I0817 00:22:34.709904    2868 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/9d4e7e0edfcddea65d8692c49329c91e-etcd-data\") pod \"etcd-cert-options-20210817001948-111344\" (UID: \"9d4e7e0edfcddea65d8692c49329c91e\") "
	Aug 17 00:22:34 cert-options-20210817001948-111344 kubelet[2868]: I0817 00:22:34.710237    2868 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/78a45aa3515e5719c5b0bba5b17e494c-etc-ca-certificates\") pod \"kube-apiserver-cert-options-20210817001948-111344\" (UID: \"78a45aa3515e5719c5b0bba5b17e494c\") "
	Aug 17 00:22:34 cert-options-20210817001948-111344 kubelet[2868]: I0817 00:22:34.710664    2868 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/78a45aa3515e5719c5b0bba5b17e494c-usr-share-ca-certificates\") pod \"kube-apiserver-cert-options-20210817001948-111344\" (UID: \"78a45aa3515e5719c5b0bba5b17e494c\") "
	Aug 17 00:22:34 cert-options-20210817001948-111344 kubelet[2868]: I0817 00:22:34.710910    2868 reconciler.go:157] "Reconciler: start to sync state"
	Aug 17 00:22:44 cert-options-20210817001948-111344 kubelet[2868]: I0817 00:22:44.187417    2868 kuberuntime_manager.go:1044] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 17 00:22:44 cert-options-20210817001948-111344 kubelet[2868]: I0817 00:22:44.251703    2868 docker_service.go:359] "Docker cri received runtime config" runtimeConfig="&RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Aug 17 00:22:44 cert-options-20210817001948-111344 kubelet[2868]: I0817 00:22:44.267860    2868 kubelet_network.go:76] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 17 00:22:45 cert-options-20210817001948-111344 kubelet[2868]: I0817 00:22:45.002670    2868 topology_manager.go:187] "Topology Admit Handler"
	Aug 17 00:22:45 cert-options-20210817001948-111344 kubelet[2868]: W0817 00:22:45.089540    2868 container.go:586] Failed to update stats for container "/kubepods/besteffort/podcd07d173-52c1-4734-96d2-06e1268109fc": /sys/fs/cgroup/cpuset/kubepods/besteffort/podcd07d173-52c1-4734-96d2-06e1268109fc/cpuset.cpus found to be empty, continuing to push stats
	Aug 17 00:22:45 cert-options-20210817001948-111344 kubelet[2868]: I0817 00:22:45.162762    2868 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cd07d173-52c1-4734-96d2-06e1268109fc-kube-proxy\") pod \"kube-proxy-krqwh\" (UID: \"cd07d173-52c1-4734-96d2-06e1268109fc\") "
	Aug 17 00:22:45 cert-options-20210817001948-111344 kubelet[2868]: I0817 00:22:45.162841    2868 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cfv8\" (UniqueName: \"kubernetes.io/projected/cd07d173-52c1-4734-96d2-06e1268109fc-kube-api-access-8cfv8\") pod \"kube-proxy-krqwh\" (UID: \"cd07d173-52c1-4734-96d2-06e1268109fc\") "
	Aug 17 00:22:45 cert-options-20210817001948-111344 kubelet[2868]: I0817 00:22:45.162882    2868 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cd07d173-52c1-4734-96d2-06e1268109fc-xtables-lock\") pod \"kube-proxy-krqwh\" (UID: \"cd07d173-52c1-4734-96d2-06e1268109fc\") "
	Aug 17 00:22:45 cert-options-20210817001948-111344 kubelet[2868]: I0817 00:22:45.165321    2868 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cd07d173-52c1-4734-96d2-06e1268109fc-lib-modules\") pod \"kube-proxy-krqwh\" (UID: \"cd07d173-52c1-4734-96d2-06e1268109fc\") "
	Aug 17 00:22:45 cert-options-20210817001948-111344 kubelet[2868]: I0817 00:22:45.442704    2868 topology_manager.go:187] "Topology Admit Handler"
	Aug 17 00:22:45 cert-options-20210817001948-111344 kubelet[2868]: I0817 00:22:45.578954    2868 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/15dc80f1-9078-4cec-902e-cfacdcf86eff-config-volume\") pod \"coredns-558bd4d5db-nrwk5\" (UID: \"15dc80f1-9078-4cec-902e-cfacdcf86eff\") "
	Aug 17 00:22:45 cert-options-20210817001948-111344 kubelet[2868]: I0817 00:22:45.579125    2868 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82fd5\" (UniqueName: \"kubernetes.io/projected/15dc80f1-9078-4cec-902e-cfacdcf86eff-kube-api-access-82fd5\") pod \"coredns-558bd4d5db-nrwk5\" (UID: \"15dc80f1-9078-4cec-902e-cfacdcf86eff\") "
	Aug 17 00:22:46 cert-options-20210817001948-111344 kubelet[2868]: I0817 00:22:46.341115    2868 topology_manager.go:187] "Topology Admit Handler"
	Aug 17 00:22:46 cert-options-20210817001948-111344 kubelet[2868]: I0817 00:22:46.409779    2868 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ss8v\" (UniqueName: \"kubernetes.io/projected/06d2be9a-9abd-4266-99b2-cc86b809deba-kube-api-access-8ss8v\") pod \"storage-provisioner\" (UID: \"06d2be9a-9abd-4266-99b2-cc86b809deba\") "
	Aug 17 00:22:46 cert-options-20210817001948-111344 kubelet[2868]: I0817 00:22:46.409891    2868 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/06d2be9a-9abd-4266-99b2-cc86b809deba-tmp\") pod \"storage-provisioner\" (UID: \"06d2be9a-9abd-4266-99b2-cc86b809deba\") "
	Aug 17 00:22:48 cert-options-20210817001948-111344 kubelet[2868]: I0817 00:22:48.427738    2868 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="7741798aedb194cde7831ee95b94db5d62ce64dab8c5de29dabff70f5e52d4ee"
	Aug 17 00:22:48 cert-options-20210817001948-111344 kubelet[2868]: I0817 00:22:48.440158    2868 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-558bd4d5db-nrwk5 through plugin: invalid network status for"
	Aug 17 00:22:48 cert-options-20210817001948-111344 kubelet[2868]: I0817 00:22:48.521554    2868 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="baaee86ea660f7ea4196bde9c3953b4774e6f2363ddbf13e6b133767bf657666"
	Aug 17 00:22:49 cert-options-20210817001948-111344 kubelet[2868]: I0817 00:22:49.984756    2868 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-558bd4d5db-nrwk5 through plugin: invalid network status for"
	Aug 17 00:22:51 cert-options-20210817001948-111344 kubelet[2868]: I0817 00:22:51.029420    2868 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-558bd4d5db-nrwk5 through plugin: invalid network status for"
	Aug 17 00:22:54 cert-options-20210817001948-111344 kubelet[2868]: E0817 00:22:54.666785    2868 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods/besteffort/podcd07d173-52c1-4734-96d2-06e1268109fc\": RecentStats: unable to find data in memory cache]"
	
	* 
	* ==> storage-provisioner [0010c0c9cd08] <==
	* I0817 00:22:49.807720       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p cert-options-20210817001948-111344 -n cert-options-20210817001948-111344
helpers_test.go:255: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p cert-options-20210817001948-111344 -n cert-options-20210817001948-111344: (4.7561156s)
helpers_test.go:262: (dbg) Run:  kubectl --context cert-options-20210817001948-111344 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: 
helpers_test.go:273: ======> post-mortem[TestCertOptions]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context cert-options-20210817001948-111344 describe pod 
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context cert-options-20210817001948-111344 describe pod : exit status 1 (194.9171ms)

** stderr ** 
	error: resource name may not be empty

** /stderr **
helpers_test.go:278: kubectl --context cert-options-20210817001948-111344 describe pod : exit status 1
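Note on the failure chain above: the field selector at helpers_test.go:262 returned no non-running pods, so the post-mortem step invoked `kubectl describe pod` with no resource names, and kubectl rejects an empty name with exit status 1. A minimal Go sketch of that interaction (illustrative only, not the actual helper code; the guard shows how an empty name list would avoid the error):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// describeNonRunningPods mirrors the post-mortem step: names comes from
	// `kubectl get po ... --field-selector=status.phase!=Running`. With an
	// empty list, `kubectl describe pod` receives no resource name and fails
	// with "error: resource name may not be empty", exactly as logged above.
	func describeNonRunningPods(kubectlContext string, names []string) ([]byte, error) {
		if len(names) == 0 {
			return nil, nil // nothing to describe; skipping avoids the exit status 1
		}
		args := append([]string{"--context", kubectlContext, "describe", "pod"}, names...)
		return exec.Command("kubectl", args...).CombinedOutput()
	}

	func main() {
		out, err := describeNonRunningPods("cert-options-20210817001948-111344", nil)
		fmt.Println(string(out), err)
	}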
helpers_test.go:176: Cleaning up "cert-options-20210817001948-111344" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-20210817001948-111344
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-20210817001948-111344: (19.8645087s)
--- FAIL: TestCertOptions (220.39s)

TestInsufficientStorage (35.26s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-windows-amd64.exe start -p insufficient-storage-20210817001044-111344 --memory=2048 --output=json --wait=true --driver=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p insufficient-storage-20210817001044-111344 --memory=2048 --output=json --wait=true --driver=docker: exit status 26 (22.6601783s)

-- stdout --
	{"data":{"currentstep":"0","message":"[insufficient-storage-20210817001044-111344] minikube v1.22.0 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763","name":"Initial Minikube Setup","totalsteps":"19"},"datacontenttype":"application/json","id":"bf0813be-57b8-47a2-872b-afacb380c3dd","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"message":"KUBECONFIG=C:\\Users\\jenkins\\minikube-integration\\kubeconfig"},"datacontenttype":"application/json","id":"e9f3b673-bbd0-416b-a948-fbba22119bac","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins\\minikube-integration\\.minikube"},"datacontenttype":"application/json","id":"b1337e0f-b87a-4454-84bc-018f1c21d38b","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_LOCATION=12230"},"datacontenttype":"application/json","id":"7b84890e-ba9a-4232-98df-a71bfefa14ec","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"},"datacontenttype":"application/json","id":"8df02a58-2b5a-49c7-a8de-df7fc2b2ef3c","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"},"datacontenttype":"application/json","id":"cae24729-572a-4490-81fe-d7175ba9fcc7","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20210817001044-111344 in cluster insufficient-storage-20210817001044-111344","name":"Starting Node","totalsteps":"19"},"datacontenttype":"application/json","id":"d551faa6-d136-48fa-851e-f748d39cc487","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"},"datacontenttype":"application/json","id":"dd4b8ec2-ca2b-43cf-85af-151b5e17c09f","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"},"datacontenttype":"application/json","id":"5fedc900-16b1-4a83-9f5f-e3dbd17447b4","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity)","name":"RSRC_DOCKER_STORAGE","url":""},"datacontenttype":"application/json","id":"e5f0907f-3232-4525-97ea-7accac441010","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.error"}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-20210817001044-111344 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-20210817001044-111344 --output=json --layout=cluster: exit status 7 (4.1495685s)

-- stdout --
	{"data":{"message":"Executing \"docker container inspect insufficient-storage-20210817001044-111344 --format={{.State.Status}}\" took an unusually long time: 2.2992165s"},"datacontenttype":"application/json","id":"5d5972cc-0ce8-4598-9f8a-c3f42c680e77","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.warning"}
	{"data":{"message":"Restarting the docker service may improve performance."},"datacontenttype":"application/json","id":"4f0b8b04-f259-4e8d-94e5-2f8167b1df8a","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.error"}
	{"Name":"insufficient-storage-20210817001044-111344","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.22.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20210817001044-111344","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0817 00:11:10.961100   16320 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20210817001044-111344" does not appear in C:\Users\jenkins\minikube-integration\kubeconfig

** /stderr **
status_test.go:87: unmarshalling: invalid character '{' after top-level value
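Note on the unmarshal error above: the status command's stdout carries three top-level JSON values (two minikube event objects, then the cluster status object), while the test at status_test.go:87 apparently unmarshals the whole buffer as a single value; encoding/json then fails on the second value with exactly this message. A minimal Go sketch under that assumption (the literals below only mimic the shape of the stdout above):

	package main

	import (
		"bytes"
		"encoding/json"
		"fmt"
	)

	func main() {
		// Shape of the stdout above: two event objects, then the status object.
		stdout := []byte(`{"type":"io.k8s.sigs.minikube.warning"}` + "\n" +
			`{"type":"io.k8s.sigs.minikube.error"}` + "\n" +
			`{"Name":"insufficient-storage","StatusCode":507}`)

		// Treating the buffer as one JSON value fails on the second object:
		var one map[string]interface{}
		fmt.Println(json.Unmarshal(stdout, &one)) // invalid character '{' after top-level value

		// A json.Decoder reads the stream one top-level value at a time instead.
		dec := json.NewDecoder(bytes.NewReader(stdout))
		for dec.More() {
			var v map[string]interface{}
			if err := dec.Decode(&v); err != nil {
				break
			}
			fmt.Println(v)
		}
	}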
helpers_test.go:176: Cleaning up "insufficient-storage-20210817001044-111344" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p insufficient-storage-20210817001044-111344
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p insufficient-storage-20210817001044-111344: (8.4514781s)
--- FAIL: TestInsufficientStorage (35.26s)

TestKubernetesUpgrade (276.67s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:224: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20210817001119-111344 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=docker

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:224: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20210817001119-111344 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=docker: (3m28.3700586s)
version_upgrade_test.go:229: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-20210817001119-111344

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-20210817001119-111344: (21.2251827s)
version_upgrade_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-20210817001119-111344 status --format={{.Host}}
version_upgrade_test.go:234: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-20210817001119-111344 status --format={{.Host}}: exit status 7 (3.1891877s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	! Executing "docker container inspect kubernetes-upgrade-20210817001119-111344 --format={{.State.Status}}" took an unusually long time: 2.6417608s
	* Restarting the docker service may improve performance.

** /stderr **
version_upgrade_test.go:236: status error: exit status 7 (may be ok)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20210817001119-111344 --memory=2200 --kubernetes-version=v1.22.0-rc.0 --alsologtostderr -v=1 --driver=docker

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20210817001119-111344 --memory=2200 --kubernetes-version=v1.22.0-rc.0 --alsologtostderr -v=1 --driver=docker: exit status 80 (27.7710071s)

-- stdout --
	* [kubernetes-upgrade-20210817001119-111344] minikube v1.22.0 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	  - KUBECONFIG=C:\Users\jenkins\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12230
	* Using the docker driver based on existing profile
	* Starting control plane node kubernetes-upgrade-20210817001119-111344 in cluster kubernetes-upgrade-20210817001119-111344
	* Pulling base image ...
	* Restarting existing docker container for "kubernetes-upgrade-20210817001119-111344" ...
	* Restarting existing docker container for "kubernetes-upgrade-20210817001119-111344" ...
	
	

-- /stdout --
** stderr ** 
	I0817 00:15:12.689208  106960 out.go:298] Setting OutFile to fd 3332 ...
	I0817 00:15:12.693160  106960 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 00:15:12.693160  106960 out.go:311] Setting ErrFile to fd 2348...
	I0817 00:15:12.693160  106960 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 00:15:12.733661  106960 out.go:305] Setting JSON to false
	I0817 00:15:12.750138  106960 start.go:111] hostinfo: {"hostname":"windows-server-2","uptime":8367359,"bootTime":1620791953,"procs":149,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2f8328f4-5428-47c7-ab5a-b32e2504bd6f"}
	W0817 00:15:12.750138  106960 start.go:119] gopshost.Virtualization returned error: not implemented yet
	I0817 00:15:12.754421  106960 out.go:177] * [kubernetes-upgrade-20210817001119-111344] minikube v1.22.0 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	I0817 00:15:12.755569  106960 notify.go:169] Checking for updates...
	I0817 00:15:12.758115  106960 out.go:177]   - KUBECONFIG=C:\Users\jenkins\minikube-integration\kubeconfig
	I0817 00:15:12.760965  106960 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins\minikube-integration\.minikube
	I0817 00:15:12.770003  106960 out.go:177]   - MINIKUBE_LOCATION=12230
	I0817 00:15:12.772864  106960 config.go:177] Loaded profile config "kubernetes-upgrade-20210817001119-111344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.14.0
	I0817 00:15:12.775463  106960 driver.go:335] Setting default libvirt URI to qemu:///system
	I0817 00:15:15.299764  106960 docker.go:132] docker version: linux-20.10.2
	I0817 00:15:15.325989  106960 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 00:15:16.663945  106960 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.3376182s)
	I0817 00:15:16.665485  106960 info.go:263] docker info: {ID:4XCY:3GZD:KK67:IPM7:RRQF:WWZF:OGQ6:X6HQ:572M:7N57:P63G:EAE5 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:true NGoroutines:61 SystemTime:2021-08-17 00:15:16.0219291 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0817 00:15:16.668369  106960 out.go:177] * Using the docker driver based on existing profile
	I0817 00:15:16.668654  106960 start.go:278] selected driver: docker
	I0817 00:15:16.668654  106960 start.go:751] validating driver "docker" against &{Name:kubernetes-upgrade-20210817001119-111344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:kubernetes-upgrade-20210817001119-111344 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 00:15:16.669001  106960 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0817 00:15:16.911320  106960 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 00:15:18.155166  106960 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.2434669s)
	I0817 00:15:18.156083  106960 info.go:263] docker info: {ID:4XCY:3GZD:KK67:IPM7:RRQF:WWZF:OGQ6:X6HQ:572M:7N57:P63G:EAE5 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:true NGoroutines:61 SystemTime:2021-08-17 00:15:17.6010669 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0817 00:15:18.157028  106960 cni.go:93] Creating CNI manager for ""
	I0817 00:15:18.157346  106960 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0817 00:15:18.157346  106960 start_flags.go:277] config:
	{Name:kubernetes-upgrade-20210817001119-111344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:kubernetes-upgrade-20210817001119-111344 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 00:15:18.164474  106960 out.go:177] * Starting control plane node kubernetes-upgrade-20210817001119-111344 in cluster kubernetes-upgrade-20210817001119-111344
	I0817 00:15:18.164764  106960 cache.go:117] Beginning downloading kic base image for docker with docker
	I0817 00:15:18.167242  106960 out.go:177] * Pulling base image ...
	I0817 00:15:18.167732  106960 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime docker
	I0817 00:15:18.168024  106960 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0817 00:15:18.168220  106960 preload.go:147] Found local preload: C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v11-v1.22.0-rc.0-docker-overlay2-amd64.tar.lz4
	I0817 00:15:18.168220  106960 cache.go:56] Caching tarball of preloaded images
	I0817 00:15:18.173577  106960 preload.go:173] Found C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v11-v1.22.0-rc.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0817 00:15:18.173821  106960 cache.go:59] Finished verifying existence of preloaded tar for  v1.22.0-rc.0 on docker
	I0817 00:15:18.174369  106960 profile.go:148] Saving config to C:\Users\jenkins\minikube-integration\.minikube\profiles\kubernetes-upgrade-20210817001119-111344\config.json ...
	I0817 00:15:18.951373  106960 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0817 00:15:18.951373  106960 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0817 00:15:18.951373  106960 cache.go:205] Successfully downloaded all kic artifacts
	I0817 00:15:18.951373  106960 start.go:313] acquiring machines lock for kubernetes-upgrade-20210817001119-111344: {Name:mkd9283fa2f1b3b972171104bec0702df88f1fb0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 00:15:18.951373  106960 start.go:317] acquired machines lock for "kubernetes-upgrade-20210817001119-111344" in 0s
	I0817 00:15:18.951373  106960 start.go:93] Skipping create...Using existing machine configuration
	I0817 00:15:18.951373  106960 fix.go:55] fixHost starting: 
	I0817 00:15:18.977161  106960 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20210817001119-111344 --format={{.State.Status}}
	I0817 00:15:19.706036  106960 fix.go:108] recreateIfNeeded on kubernetes-upgrade-20210817001119-111344: state=Stopped err=<nil>
	W0817 00:15:19.706492  106960 fix.go:134] unexpected machine state, will restart: <nil>
	I0817 00:15:19.708676  106960 out.go:177] * Restarting existing docker container for "kubernetes-upgrade-20210817001119-111344" ...
	I0817 00:15:19.730802  106960 cli_runner.go:115] Run: docker start kubernetes-upgrade-20210817001119-111344
	W0817 00:15:20.580646  106960 cli_runner.go:162] docker start kubernetes-upgrade-20210817001119-111344 returned with exit code 1
	I0817 00:15:20.588276  106960 cli_runner.go:115] Run: docker inspect kubernetes-upgrade-20210817001119-111344
	I0817 00:15:21.189527  106960 errors.go:84] Postmortem inspect ("docker inspect kubernetes-upgrade-20210817001119-111344"): -- stdout --
	[
	    {
	        "Id": "2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da",
	        "Created": "2021-08-17T00:11:33.2667132Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 130,
	            "Error": "network 76fe7f8c8a06cbce45c22be0496564774e7468502c41c92f5937f12e17cdef08 not found",
	            "StartedAt": "2021-08-17T00:11:35.924797Z",
	            "FinishedAt": "2021-08-17T00:15:05.3732415Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da/hostname",
	        "HostsPath": "/var/lib/docker/containers/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da/hosts",
	        "LogPath": "/var/lib/docker/containers/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da-json.log",
	        "Name": "/kubernetes-upgrade-20210817001119-111344",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-20210817001119-111344:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-20210817001119-111344",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/afe12c7d243f8c21e4659e09f7081dd299d594587a068386ae355047ce465d53-init/diff:/var/lib/docker/overlay2/e167e57d4b442602b2435f5ffd2147b1da53de34df49d96ce69565867fcf3850/diff:/var/lib/docker/overlay2/dbfef15a73962254d5bcc2c91a409021fc3573c3135096621d707c6f4feaac7d/diff:/var/lib/docker/overlay2/7fc44848dc580276135d9db2b62ce047cfba1909de5e91acbe8c1a5fc8fb3649/diff:/var/lib/docker/overlay2/493996ff2d6a75ef70db2749dded6936397fe536c32e28dda979b8af93e19f13/diff:/var/lib/docker/overlay2/b862553905dec6f42a41351a012fdce386251d97160f74f6b1feb3b455e1f53a/diff:/var/lib/docker/overlay2/517a8b2830d9e81ff950c8305063a6681219abbb7b22f3a87587fa819a0728ed/diff:/var/lib/docker/overlay2/f2b268080cfd9bbb64731ea6b7cb2ec64077e6c2701c2ab6e8b358a541056c5d/diff:/var/lib/docker/overlay2/ee5e612696333c681900cad605a1f678e9114e9c7ecf70717fad21aea1e52992/diff:/var/lib/docker/overlay2/6f44289af0b09a02645c237aabeff61487c57040b9531c0f7bd97517308bfd57/diff:/var/lib/docker/overlay2/f98f6721a411bacf9d310d4d4405fbd528fa90d60af5ffabda9d55cef9ef3033/diff:/var/lib/docker/overlay2/8bc2e0f6b7c2aeccc6a944f316dbac5672f8685cc5dd5d3c2fc4bd370db4949f/diff:/var/lib/docker/overlay2/ef9e793c1e243004ff088f210369994837eb19a8abd21cf93f75257155445f16/diff:/var/lib/docker/overlay2/48fa7f37fc37f8220a31f4294bc800ef7a33c53c10bdc23d7dc68f27cfe4e535/diff:/var/lib/docker/overlay2/54bc5e0e0c32fdc66ce3eeb345721201a63a0c878d4665607246cd4aa5af61e5/diff:/var/lib/docker/overlay2/398c3fc63254fcc564086ced0eb7211f2d474f8bbdcd43ee27fd609e767c44a6/diff:/var/lib/docker/overlay2/796acb5b93384da004a8065a332cbb07c952569bdd7bb5e551b218e4c5c61f73/diff:/var/lib/docker/overlay2/d90baef87ad95bdfb14a2f35e4cb62336e18c21eb934266f43bfbe017252b857/diff:/var/lib/docker/overlay2/c16752decc8ef06fce4eebdf4ff4725414f3aa80cccd7b3ffdc325095930c0b4/diff:/var/lib/docker/overlay2/a679084eec181b0e1408e573d1ac08c47af1fd8266eb5884bf1a38d5ba0ddbbc/diff:/var/lib/docker/overlay2/15becb79b0d40211562ae33ddc5ec776276b9ae42c8a9f4645dcc6442b36f771/diff:/var/lib/docker/overlay2/068a9a5dce1094eb72788237bd9cda4c76345774d5e647f0af81302a75861f4a/diff:/var/lib/docker/overlay2/74b9e9d807e09734ee96c76bc67adc56c9e3286b39315f89f6747c8c917ad2e5/diff:/var/lib/docker/overlay2/75de8e4895a0b4efe563705c06184db384b5c40154856b9bca2106a8d59fc151/diff:/var/lib/docker/overlay2/cbca3c40b21fee2ef276744168492f17203934aca8de4b459edae2fa55b6bb02/diff:/var/lib/docker/overlay2/584d28a6308bb998bd89d7d92c45b57b9dd66de472d166972d2f5195afd9dd44/diff:/var/lib/docker/overlay2/9c722118749c036eb2d00ba5a6925c5f32b121d64974c99e2de552b26a8bb7cd/diff:/var/lib/docker/overlay2/24908c792743f57c182587c66263f074ed86ae7c5812c631dea82d8ec6650e81/diff:/var/lib/docker/overlay2/9a8de59bfb816b3fc2f0fd522ef966196534483b5e87aafd180dd8b07e9c3582/diff:/var/lib/docker/overlay2/df46d170084213da519dea7e0f402d51272dc10df4d7cd7f37c528c411ac7000/diff:/var/lib/docker/overlay2/36b86a6f515e5882426e598755bb77d43cc340fd20798dfd0a810cd2ab96eeb6/diff:/var/lib/docker/overlay2/b54ac02f70047359cd143a32f862d18498cb556877ccfd252defb9d17fc9d9f5/diff:/var/lib/docker/overlay2/971b77d080920997e1d0d0936f312a9a322ccd6ab9920c83a8eb5d14b93c3849/diff:/var/lib/docker/overlay2/5b5c21ae360c7e0738c0048bc3fe8d7d3cc0640d266660121f3968f675f42063/diff:/var/lib/docker/overlay2/e07bf2561a99ba47435b8f84b267268e02e9e4ff47832bd5054ee28bb1ca5001/diff:/var/lib/docker/overlay2/0c560be48f01814af21ec54fc79ea5e8db28f05e967a17b331be28ad61c75483/diff:/var/lib/docker/overlay2/27930667f3fd0fd38c13a39c0590c03a2c3b3ba04f0a3c946167be6a40f50c46/diff",
	                "MergedDir": "/var/lib/docker/overlay2/afe12c7d243f8c21e4659e09f7081dd299d594587a068386ae355047ce465d53/merged",
	                "UpperDir": "/var/lib/docker/overlay2/afe12c7d243f8c21e4659e09f7081dd299d594587a068386ae355047ce465d53/diff",
	                "WorkDir": "/var/lib/docker/overlay2/afe12c7d243f8c21e4659e09f7081dd299d594587a068386ae355047ce465d53/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-20210817001119-111344",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-20210817001119-111344/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-20210817001119-111344",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-20210817001119-111344",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-20210817001119-111344",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5e7fda3ade8792c0aba0069fda9edea4aa344e0c9943b3ec20c40c66d95a42b9",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {},
	            "SandboxKey": "/var/run/docker/netns/5e7fda3ade87",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-20210817001119-111344": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "2c252d82645a",
	                        "kubernetes-upgrade-20210817001119-111344"
	                    ],
	                    "NetworkID": "76fe7f8c8a06cbce45c22be0496564774e7468502c41c92f5937f12e17cdef08",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
	
	-- /stdout --
	I0817 00:15:21.196604  106960 cli_runner.go:115] Run: docker logs --timestamps --details kubernetes-upgrade-20210817001119-111344
	I0817 00:15:21.739602  106960 errors.go:91] Postmortem logs ("docker logs --timestamps --details kubernetes-upgrade-20210817001119-111344"): -- stdout --
	2021-08-17T00:11:35.908761000Z  + configure_containerd
	2021-08-17T00:11:35.908805200Z  ++ stat -f -c %T /kind
	2021-08-17T00:11:35.913670800Z  + [[ overlayfs == \z\f\s ]]
	2021-08-17T00:11:35.917897300Z  + configure_proxy
	2021-08-17T00:11:35.917918100Z  + mkdir -p /etc/systemd/system.conf.d/
	2021-08-17T00:11:35.917923900Z  + [[ ! -z '' ]]
	2021-08-17T00:11:35.917928600Z  + cat
	2021-08-17T00:11:35.924368600Z  + fix_kmsg
	2021-08-17T00:11:35.925780100Z  + [[ ! -e /dev/kmsg ]]
	2021-08-17T00:11:35.926379900Z  + fix_mount
	2021-08-17T00:11:35.929989100Z  + echo 'INFO: ensuring we can execute mount/umount even with userns-remap'
	2021-08-17T00:11:35.931605600Z  INFO: ensuring we can execute mount/umount even with userns-remap
	2021-08-17T00:11:35.932555900Z  ++ which mount
	2021-08-17T00:11:35.937233900Z  ++ which umount
	2021-08-17T00:11:35.941130600Z  + chown root:root /usr/bin/mount /usr/bin/umount
	2021-08-17T00:11:35.990840200Z  ++ which mount
	2021-08-17T00:11:35.994161900Z  ++ which umount
	2021-08-17T00:11:36.010401600Z  + chmod -s /usr/bin/mount /usr/bin/umount
	2021-08-17T00:11:36.016596000Z  +++ which mount
	2021-08-17T00:11:36.021522700Z  ++ stat -f -c %T /usr/bin/mount
	2021-08-17T00:11:36.034160300Z  + [[ overlayfs == \a\u\f\s ]]
	2021-08-17T00:11:36.034178900Z  + echo 'INFO: remounting /sys read-only'
	2021-08-17T00:11:36.034184100Z  INFO: remounting /sys read-only
	2021-08-17T00:11:36.034189200Z  + mount -o remount,ro /sys
	2021-08-17T00:11:36.036163400Z  + echo 'INFO: making mounts shared'
	2021-08-17T00:11:36.036552500Z  INFO: making mounts shared
	2021-08-17T00:11:36.039947000Z  + mount --make-rshared /
	2021-08-17T00:11:36.045381500Z  + retryable_fix_cgroup
	2021-08-17T00:11:36.052099000Z  ++ seq 0 10
	2021-08-17T00:11:36.058791200Z  + for i in $(seq 0 10)
	2021-08-17T00:11:36.058810100Z  + fix_cgroup
	2021-08-17T00:11:36.058815400Z  + [[ -f /sys/fs/cgroup/cgroup.controllers ]]
	2021-08-17T00:11:36.058819900Z  + echo 'INFO: detected cgroup v1'
	2021-08-17T00:11:36.058824300Z  INFO: detected cgroup v1
	2021-08-17T00:11:36.058828800Z  + echo 'INFO: fix cgroup mounts for all subsystems'
	2021-08-17T00:11:36.058833300Z  INFO: fix cgroup mounts for all subsystems
	2021-08-17T00:11:36.058837600Z  + local current_cgroup
	2021-08-17T00:11:36.059089500Z  ++ cut -d: -f3
	2021-08-17T00:11:36.062062200Z  ++ grep -E '^[^:]*:([^:]*,)?cpu(,[^,:]*)?:.*' /proc/self/cgroup
	2021-08-17T00:11:36.080013000Z  + current_cgroup=/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.080036200Z  + local cgroup_subsystems
	2021-08-17T00:11:36.083395500Z  ++ findmnt -lun -o source,target -t cgroup
	2021-08-17T00:11:36.083542000Z  ++ grep /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.086238200Z  ++ awk '{print $2}'
	2021-08-17T00:11:36.099551900Z  + cgroup_subsystems='/sys/fs/cgroup/cpuset
	2021-08-17T00:11:36.099562900Z  /sys/fs/cgroup/cpu
	2021-08-17T00:11:36.099568000Z  /sys/fs/cgroup/cpuacct
	2021-08-17T00:11:36.099572500Z  /sys/fs/cgroup/blkio
	2021-08-17T00:11:36.099578000Z  /sys/fs/cgroup/memory
	2021-08-17T00:11:36.099582300Z  /sys/fs/cgroup/devices
	2021-08-17T00:11:36.099586500Z  /sys/fs/cgroup/freezer
	2021-08-17T00:11:36.099591000Z  /sys/fs/cgroup/net_cls
	2021-08-17T00:11:36.099595500Z  /sys/fs/cgroup/perf_event
	2021-08-17T00:11:36.099599900Z  /sys/fs/cgroup/net_prio
	2021-08-17T00:11:36.099604200Z  /sys/fs/cgroup/hugetlb
	2021-08-17T00:11:36.099608400Z  /sys/fs/cgroup/pids
	2021-08-17T00:11:36.099612700Z  /sys/fs/cgroup/systemd'
	2021-08-17T00:11:36.099618000Z  + local cgroup_mounts
	2021-08-17T00:11:36.106396600Z  ++ grep -E -o '/[[:alnum:]].* /sys/fs/cgroup.*.*cgroup' /proc/self/mountinfo
	2021-08-17T00:11:36.118124900Z  + cgroup_mounts='/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime shared:173 master:18 - cgroup
	2021-08-17T00:11:36.118141100Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/cpu rw,nosuid,nodev,noexec,relatime shared:174 master:19 - cgroup
	2021-08-17T00:11:36.118148100Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/cpuacct rw,nosuid,nodev,noexec,relatime shared:175 master:20 - cgroup
	2021-08-17T00:11:36.118522700Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime shared:176 master:21 - cgroup
	2021-08-17T00:11:36.118530500Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime shared:177 master:22 - cgroup
	2021-08-17T00:11:36.118535700Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime shared:178 master:23 - cgroup
	2021-08-17T00:11:36.118540700Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime shared:186 master:24 - cgroup
	2021-08-17T00:11:36.118546000Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/net_cls rw,nosuid,nodev,noexec,relatime shared:187 master:25 - cgroup
	2021-08-17T00:11:36.118551200Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime shared:188 master:26 - cgroup
	2021-08-17T00:11:36.118556300Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/net_prio rw,nosuid,nodev,noexec,relatime shared:189 master:27 - cgroup
	2021-08-17T00:11:36.118857500Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/hugetlb rw,nosuid,nodev,noexec,relatime shared:190 master:28 - cgroup
	2021-08-17T00:11:36.118869000Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/pids rw,nosuid,nodev,noexec,relatime shared:191 master:29 - cgroup
	2021-08-17T00:11:36.118874900Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime shared:193 master:31 - cgroup cgroup'
	2021-08-17T00:11:36.118995600Z  + [[ -n /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime shared:173 master:18 - cgroup
	2021-08-17T00:11:36.119008500Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/cpu rw,nosuid,nodev,noexec,relatime shared:174 master:19 - cgroup
	2021-08-17T00:11:36.119014200Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/cpuacct rw,nosuid,nodev,noexec,relatime shared:175 master:20 - cgroup
	2021-08-17T00:11:36.119019300Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime shared:176 master:21 - cgroup
	2021-08-17T00:11:36.119024200Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime shared:177 master:22 - cgroup
	2021-08-17T00:11:36.119029300Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime shared:178 master:23 - cgroup
	2021-08-17T00:11:36.119034400Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime shared:186 master:24 - cgroup
	2021-08-17T00:11:36.119039500Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/net_cls rw,nosuid,nodev,noexec,relatime shared:187 master:25 - cgroup
	2021-08-17T00:11:36.119044900Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime shared:188 master:26 - cgroup
	2021-08-17T00:11:36.119049700Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/net_prio rw,nosuid,nodev,noexec,relatime shared:189 master:27 - cgroup
	2021-08-17T00:11:36.119054500Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/hugetlb rw,nosuid,nodev,noexec,relatime shared:190 master:28 - cgroup
	2021-08-17T00:11:36.119059700Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/pids rw,nosuid,nodev,noexec,relatime shared:191 master:29 - cgroup
	2021-08-17T00:11:36.119064600Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime shared:193 master:31 - cgroup cgroup ]]
	2021-08-17T00:11:36.119069700Z  + local mount_root
	2021-08-17T00:11:36.119587000Z  ++ head -n 1
	2021-08-17T00:11:36.125014500Z  ++ cut '-d ' -f1
	2021-08-17T00:11:36.143436900Z  + mount_root=/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.149817100Z  ++ cut '-d ' -f 2
	2021-08-17T00:11:36.149837800Z  ++ echo '/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime shared:173 master:18 - cgroup
	2021-08-17T00:11:36.149844700Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/cpu rw,nosuid,nodev,noexec,relatime shared:174 master:19 - cgroup
	2021-08-17T00:11:36.149849800Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/cpuacct rw,nosuid,nodev,noexec,relatime shared:175 master:20 - cgroup
	2021-08-17T00:11:36.149855000Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime shared:176 master:21 - cgroup
	2021-08-17T00:11:36.149860300Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime shared:177 master:22 - cgroup
	2021-08-17T00:11:36.149865300Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime shared:178 master:23 - cgroup
	2021-08-17T00:11:36.149870100Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime shared:186 master:24 - cgroup
	2021-08-17T00:11:36.149875100Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/net_cls rw,nosuid,nodev,noexec,relatime shared:187 master:25 - cgroup
	2021-08-17T00:11:36.149879900Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime shared:188 master:26 - cgroup
	2021-08-17T00:11:36.149884300Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/net_prio rw,nosuid,nodev,noexec,relatime shared:189 master:27 - cgroup
	2021-08-17T00:11:36.149888700Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/hugetlb rw,nosuid,nodev,noexec,relatime shared:190 master:28 - cgroup
	2021-08-17T00:11:36.149893200Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/pids rw,nosuid,nodev,noexec,relatime shared:191 master:29 - cgroup
	2021-08-17T00:11:36.149898300Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime shared:193 master:31 - cgroup cgroup'
	2021-08-17T00:11:36.156907800Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2021-08-17T00:11:36.156931500Z  + local target=/sys/fs/cgroup/cpuset/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.156938000Z  + findmnt /sys/fs/cgroup/cpuset/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.178971500Z  + mkdir -p /sys/fs/cgroup/cpuset/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.189082000Z  + mount --bind /sys/fs/cgroup/cpuset /sys/fs/cgroup/cpuset/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.201372500Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2021-08-17T00:11:36.201404200Z  + local target=/sys/fs/cgroup/cpu/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.201421500Z  + findmnt /sys/fs/cgroup/cpu/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.210825800Z  + mkdir -p /sys/fs/cgroup/cpu/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.218168800Z  + mount --bind /sys/fs/cgroup/cpu /sys/fs/cgroup/cpu/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.225229500Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2021-08-17T00:11:36.225790900Z  + local target=/sys/fs/cgroup/cpuacct/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.225804700Z  + findmnt /sys/fs/cgroup/cpuacct/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.234489700Z  + mkdir -p /sys/fs/cgroup/cpuacct/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.234514200Z  + mount --bind /sys/fs/cgroup/cpuacct /sys/fs/cgroup/cpuacct/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.243090000Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2021-08-17T00:11:36.243103000Z  + local target=/sys/fs/cgroup/blkio/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.243108700Z  + findmnt /sys/fs/cgroup/blkio/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.246084300Z  + mkdir -p /sys/fs/cgroup/blkio/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.257379900Z  + mount --bind /sys/fs/cgroup/blkio /sys/fs/cgroup/blkio/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.272139900Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2021-08-17T00:11:36.272167000Z  + local target=/sys/fs/cgroup/memory/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.272173300Z  + findmnt /sys/fs/cgroup/memory/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.285358000Z  + mkdir -p /sys/fs/cgroup/memory/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.292104600Z  + mount --bind /sys/fs/cgroup/memory /sys/fs/cgroup/memory/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.297044800Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2021-08-17T00:11:36.297073600Z  + local target=/sys/fs/cgroup/devices/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.297080100Z  + findmnt /sys/fs/cgroup/devices/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.313235900Z  + mkdir -p /sys/fs/cgroup/devices/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.314905200Z  + mount --bind /sys/fs/cgroup/devices /sys/fs/cgroup/devices/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.323774100Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2021-08-17T00:11:36.323793500Z  + local target=/sys/fs/cgroup/freezer/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.323799600Z  + findmnt /sys/fs/cgroup/freezer/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.333168500Z  + mkdir -p /sys/fs/cgroup/freezer/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.337965600Z  + mount --bind /sys/fs/cgroup/freezer /sys/fs/cgroup/freezer/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.346097700Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2021-08-17T00:11:36.346120400Z  + local target=/sys/fs/cgroup/net_cls/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.346127000Z  + findmnt /sys/fs/cgroup/net_cls/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.357361100Z  + mkdir -p /sys/fs/cgroup/net_cls/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.361175800Z  + mount --bind /sys/fs/cgroup/net_cls /sys/fs/cgroup/net_cls/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.373981600Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2021-08-17T00:11:36.374008200Z  + local target=/sys/fs/cgroup/perf_event/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.374014300Z  + findmnt /sys/fs/cgroup/perf_event/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.379072400Z  + mkdir -p /sys/fs/cgroup/perf_event/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.388236200Z  + mount --bind /sys/fs/cgroup/perf_event /sys/fs/cgroup/perf_event/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.394694400Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2021-08-17T00:11:36.394718500Z  + local target=/sys/fs/cgroup/net_prio/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.394724900Z  + findmnt /sys/fs/cgroup/net_prio/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.399036600Z  + mkdir -p /sys/fs/cgroup/net_prio/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.404697900Z  + mount --bind /sys/fs/cgroup/net_prio /sys/fs/cgroup/net_prio/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.407863900Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2021-08-17T00:11:36.408816800Z  + local target=/sys/fs/cgroup/hugetlb/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.409635600Z  + findmnt /sys/fs/cgroup/hugetlb/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.415031700Z  + mkdir -p /sys/fs/cgroup/hugetlb/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.422071600Z  + mount --bind /sys/fs/cgroup/hugetlb /sys/fs/cgroup/hugetlb/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.427948100Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2021-08-17T00:11:36.428591900Z  + local target=/sys/fs/cgroup/pids/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.428802900Z  + findmnt /sys/fs/cgroup/pids/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.437546700Z  + mkdir -p /sys/fs/cgroup/pids/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.445784600Z  + mount --bind /sys/fs/cgroup/pids /sys/fs/cgroup/pids/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.456576600Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2021-08-17T00:11:36.456600500Z  + local target=/sys/fs/cgroup/systemd/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.456709100Z  + findmnt /sys/fs/cgroup/systemd/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.464618100Z  + mkdir -p /sys/fs/cgroup/systemd/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.477032900Z  + mount --bind /sys/fs/cgroup/systemd /sys/fs/cgroup/systemd/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.484648600Z  + mount --make-rprivate /sys/fs/cgroup
	2021-08-17T00:11:36.491582100Z  + echo '/sys/fs/cgroup/cpuset
	2021-08-17T00:11:36.491603300Z  /sys/fs/cgroup/cpu
	2021-08-17T00:11:36.491608900Z  /sys/fs/cgroup/cpuacct
	2021-08-17T00:11:36.491614300Z  /sys/fs/cgroup/blkio
	2021-08-17T00:11:36.491618600Z  /sys/fs/cgroup/memory
	2021-08-17T00:11:36.491623200Z  /sys/fs/cgroup/devices
	2021-08-17T00:11:36.491627800Z  /sys/fs/cgroup/freezer
	2021-08-17T00:11:36.491739600Z  /sys/fs/cgroup/net_cls
	2021-08-17T00:11:36.491748200Z  /sys/fs/cgroup/perf_event
	2021-08-17T00:11:36.491753000Z  /sys/fs/cgroup/net_prio
	2021-08-17T00:11:36.491757500Z  /sys/fs/cgroup/hugetlb
	2021-08-17T00:11:36.491762000Z  /sys/fs/cgroup/pids
	2021-08-17T00:11:36.491766400Z  /sys/fs/cgroup/systemd'
	2021-08-17T00:11:36.491771100Z  + IFS=
	2021-08-17T00:11:36.491775300Z  + read -r subsystem
	2021-08-17T00:11:36.493534900Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/cpuset
	2021-08-17T00:11:36.493555600Z  + local cgroup_root=/kubelet
	2021-08-17T00:11:36.493561600Z  + local subsystem=/sys/fs/cgroup/cpuset
	2021-08-17T00:11:36.493566300Z  + '[' -z /kubelet ']'
	2021-08-17T00:11:36.493571200Z  + mkdir -p /sys/fs/cgroup/cpuset//kubelet
	2021-08-17T00:11:36.501090600Z  + '[' /sys/fs/cgroup/cpuset == /sys/fs/cgroup/cpuset ']'
	2021-08-17T00:11:36.501653100Z  + cat /sys/fs/cgroup/cpuset/cpuset.cpus
	2021-08-17T00:11:36.507497000Z  + cat /sys/fs/cgroup/cpuset/cpuset.mems
	2021-08-17T00:11:36.512691000Z  + mount --bind /sys/fs/cgroup/cpuset//kubelet /sys/fs/cgroup/cpuset//kubelet
	2021-08-17T00:11:36.521355300Z  + IFS=
	2021-08-17T00:11:36.521375600Z  + read -r subsystem
	2021-08-17T00:11:36.521381200Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/cpu
	2021-08-17T00:11:36.521386300Z  + local cgroup_root=/kubelet
	2021-08-17T00:11:36.521391100Z  + local subsystem=/sys/fs/cgroup/cpu
	2021-08-17T00:11:36.521395600Z  + '[' -z /kubelet ']'
	2021-08-17T00:11:36.521400100Z  + mkdir -p /sys/fs/cgroup/cpu//kubelet
	2021-08-17T00:11:36.541636000Z  + '[' /sys/fs/cgroup/cpu == /sys/fs/cgroup/cpuset ']'
	2021-08-17T00:11:36.541663000Z  + mount --bind /sys/fs/cgroup/cpu//kubelet /sys/fs/cgroup/cpu//kubelet
	2021-08-17T00:11:36.547185100Z  + IFS=
	2021-08-17T00:11:36.547207400Z  + read -r subsystem
	2021-08-17T00:11:36.547213200Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/cpuacct
	2021-08-17T00:11:36.547217900Z  + local cgroup_root=/kubelet
	2021-08-17T00:11:36.547222700Z  + local subsystem=/sys/fs/cgroup/cpuacct
	2021-08-17T00:11:36.547227400Z  + '[' -z /kubelet ']'
	2021-08-17T00:11:36.547232000Z  + mkdir -p /sys/fs/cgroup/cpuacct//kubelet
	2021-08-17T00:11:36.558099600Z  + '[' /sys/fs/cgroup/cpuacct == /sys/fs/cgroup/cpuset ']'
	2021-08-17T00:11:36.558126700Z  + mount --bind /sys/fs/cgroup/cpuacct//kubelet /sys/fs/cgroup/cpuacct//kubelet
	2021-08-17T00:11:36.562708700Z  + IFS=
	2021-08-17T00:11:36.562729400Z  + read -r subsystem
	2021-08-17T00:11:36.562734800Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/blkio
	2021-08-17T00:11:36.562740200Z  + local cgroup_root=/kubelet
	2021-08-17T00:11:36.562744700Z  + local subsystem=/sys/fs/cgroup/blkio
	2021-08-17T00:11:36.562749800Z  + '[' -z /kubelet ']'
	2021-08-17T00:11:36.562754900Z  + mkdir -p /sys/fs/cgroup/blkio//kubelet
	2021-08-17T00:11:36.568378900Z  + '[' /sys/fs/cgroup/blkio == /sys/fs/cgroup/cpuset ']'
	2021-08-17T00:11:36.568400500Z  + mount --bind /sys/fs/cgroup/blkio//kubelet /sys/fs/cgroup/blkio//kubelet
	2021-08-17T00:11:36.578231400Z  + IFS=
	2021-08-17T00:11:36.578364100Z  + read -r subsystem
	2021-08-17T00:11:36.578372600Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/memory
	2021-08-17T00:11:36.578377700Z  + local cgroup_root=/kubelet
	2021-08-17T00:11:36.578382300Z  + local subsystem=/sys/fs/cgroup/memory
	2021-08-17T00:11:36.578386900Z  + '[' -z /kubelet ']'
	2021-08-17T00:11:36.578391400Z  + mkdir -p /sys/fs/cgroup/memory//kubelet
	2021-08-17T00:11:36.582830200Z  + '[' /sys/fs/cgroup/memory == /sys/fs/cgroup/cpuset ']'
	2021-08-17T00:11:36.582854100Z  + mount --bind /sys/fs/cgroup/memory//kubelet /sys/fs/cgroup/memory//kubelet
	2021-08-17T00:11:36.594874600Z  + IFS=
	2021-08-17T00:11:36.594896500Z  + read -r subsystem
	2021-08-17T00:11:36.594902400Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/devices
	2021-08-17T00:11:36.594907600Z  + local cgroup_root=/kubelet
	2021-08-17T00:11:36.594912100Z  + local subsystem=/sys/fs/cgroup/devices
	2021-08-17T00:11:36.594916600Z  + '[' -z /kubelet ']'
	2021-08-17T00:11:36.594921100Z  + mkdir -p /sys/fs/cgroup/devices//kubelet
	2021-08-17T00:11:36.602910300Z  + '[' /sys/fs/cgroup/devices == /sys/fs/cgroup/cpuset ']'
	2021-08-17T00:11:36.602934200Z  + mount --bind /sys/fs/cgroup/devices//kubelet /sys/fs/cgroup/devices//kubelet
	2021-08-17T00:11:36.618866100Z  + IFS=
	2021-08-17T00:11:36.621146200Z  + read -r subsystem
	2021-08-17T00:11:36.621690800Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/freezer
	2021-08-17T00:11:36.623465100Z  + local cgroup_root=/kubelet
	2021-08-17T00:11:36.623484400Z  + local subsystem=/sys/fs/cgroup/freezer
	2021-08-17T00:11:36.623490300Z  + '[' -z /kubelet ']'
	2021-08-17T00:11:36.623495100Z  + mkdir -p /sys/fs/cgroup/freezer//kubelet
	2021-08-17T00:11:36.627221500Z  + '[' /sys/fs/cgroup/freezer == /sys/fs/cgroup/cpuset ']'
	2021-08-17T00:11:36.627240900Z  + mount --bind /sys/fs/cgroup/freezer//kubelet /sys/fs/cgroup/freezer//kubelet
	2021-08-17T00:11:36.657428200Z  + IFS=
	2021-08-17T00:11:36.657457200Z  + read -r subsystem
	2021-08-17T00:11:36.657463500Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/net_cls
	2021-08-17T00:11:36.657468600Z  + local cgroup_root=/kubelet
	2021-08-17T00:11:36.657473200Z  + local subsystem=/sys/fs/cgroup/net_cls
	2021-08-17T00:11:36.657477900Z  + '[' -z /kubelet ']'
	2021-08-17T00:11:36.658021400Z  + mkdir -p /sys/fs/cgroup/net_cls//kubelet
	2021-08-17T00:11:36.661064000Z  + '[' /sys/fs/cgroup/net_cls == /sys/fs/cgroup/cpuset ']'
	2021-08-17T00:11:36.661082500Z  + mount --bind /sys/fs/cgroup/net_cls//kubelet /sys/fs/cgroup/net_cls//kubelet
	2021-08-17T00:11:36.668615200Z  + IFS=
	2021-08-17T00:11:36.668637300Z  + read -r subsystem
	2021-08-17T00:11:36.668643300Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/perf_event
	2021-08-17T00:11:36.668648600Z  + local cgroup_root=/kubelet
	2021-08-17T00:11:36.668662400Z  + local subsystem=/sys/fs/cgroup/perf_event
	2021-08-17T00:11:36.668667700Z  + '[' -z /kubelet ']'
	2021-08-17T00:11:36.668672100Z  + mkdir -p /sys/fs/cgroup/perf_event//kubelet
	2021-08-17T00:11:36.679080300Z  + '[' /sys/fs/cgroup/perf_event == /sys/fs/cgroup/cpuset ']'
	2021-08-17T00:11:36.683528800Z  + mount --bind /sys/fs/cgroup/perf_event//kubelet /sys/fs/cgroup/perf_event//kubelet
	2021-08-17T00:11:36.702910600Z  + IFS=
	2021-08-17T00:11:36.702940100Z  + read -r subsystem
	2021-08-17T00:11:36.702946000Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/net_prio
	2021-08-17T00:11:36.702951000Z  + local cgroup_root=/kubelet
	2021-08-17T00:11:36.702955600Z  + local subsystem=/sys/fs/cgroup/net_prio
	2021-08-17T00:11:36.702960400Z  + '[' -z /kubelet ']'
	2021-08-17T00:11:36.702964900Z  + mkdir -p /sys/fs/cgroup/net_prio//kubelet
	2021-08-17T00:11:36.709417300Z  + '[' /sys/fs/cgroup/net_prio == /sys/fs/cgroup/cpuset ']'
	2021-08-17T00:11:36.710764000Z  + mount --bind /sys/fs/cgroup/net_prio//kubelet /sys/fs/cgroup/net_prio//kubelet
	2021-08-17T00:11:36.722862100Z  + IFS=
	2021-08-17T00:11:36.723120200Z  + read -r subsystem
	2021-08-17T00:11:36.723130700Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/hugetlb
	2021-08-17T00:11:36.723135800Z  + local cgroup_root=/kubelet
	2021-08-17T00:11:36.723140300Z  + local subsystem=/sys/fs/cgroup/hugetlb
	2021-08-17T00:11:36.723145000Z  + '[' -z /kubelet ']'
	2021-08-17T00:11:36.723149500Z  + mkdir -p /sys/fs/cgroup/hugetlb//kubelet
	2021-08-17T00:11:36.733738700Z  + '[' /sys/fs/cgroup/hugetlb == /sys/fs/cgroup/cpuset ']'
	2021-08-17T00:11:36.737586200Z  + mount --bind /sys/fs/cgroup/hugetlb//kubelet /sys/fs/cgroup/hugetlb//kubelet
	2021-08-17T00:11:36.737607200Z  + IFS=
	2021-08-17T00:11:36.737613000Z  + read -r subsystem
	2021-08-17T00:11:36.737618500Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/pids
	2021-08-17T00:11:36.737623500Z  + local cgroup_root=/kubelet
	2021-08-17T00:11:36.737628300Z  + local subsystem=/sys/fs/cgroup/pids
	2021-08-17T00:11:36.737633200Z  + '[' -z /kubelet ']'
	2021-08-17T00:11:36.737637600Z  + mkdir -p /sys/fs/cgroup/pids//kubelet
	2021-08-17T00:11:36.737650300Z  + '[' /sys/fs/cgroup/pids == /sys/fs/cgroup/cpuset ']'
	2021-08-17T00:11:36.737656200Z  + mount --bind /sys/fs/cgroup/pids//kubelet /sys/fs/cgroup/pids//kubelet
	2021-08-17T00:11:36.745410900Z  + IFS=
	2021-08-17T00:11:36.745440600Z  + read -r subsystem
	2021-08-17T00:11:36.745448000Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/systemd
	2021-08-17T00:11:36.745613500Z  + local cgroup_root=/kubelet
	2021-08-17T00:11:36.745626700Z  + local subsystem=/sys/fs/cgroup/systemd
	2021-08-17T00:11:36.745632500Z  + '[' -z /kubelet ']'
	2021-08-17T00:11:36.745638400Z  + mkdir -p /sys/fs/cgroup/systemd//kubelet
	2021-08-17T00:11:36.750871300Z  + '[' /sys/fs/cgroup/systemd == /sys/fs/cgroup/cpuset ']'
	2021-08-17T00:11:36.750895500Z  + mount --bind /sys/fs/cgroup/systemd//kubelet /sys/fs/cgroup/systemd//kubelet
	2021-08-17T00:11:36.764528400Z  + IFS=
	2021-08-17T00:11:36.764559500Z  + read -r subsystem
	2021-08-17T00:11:36.766332200Z  + return
	2021-08-17T00:11:36.766354900Z  + fix_machine_id
	2021-08-17T00:11:36.766361200Z  + echo 'INFO: clearing and regenerating /etc/machine-id'
	2021-08-17T00:11:36.766366000Z  INFO: clearing and regenerating /etc/machine-id
	2021-08-17T00:11:36.766371000Z  + rm -f /etc/machine-id
	2021-08-17T00:11:36.768782600Z  + systemd-machine-id-setup
	2021-08-17T00:11:36.786567700Z  Initializing machine ID from D-Bus machine ID.
	2021-08-17T00:11:36.857614700Z  + fix_product_name
	2021-08-17T00:11:36.858089900Z  + [[ -f /sys/class/dmi/id/product_name ]]
	2021-08-17T00:11:36.859058000Z  + echo 'INFO: faking /sys/class/dmi/id/product_name to be "kind"'
	2021-08-17T00:11:36.859075700Z  INFO: faking /sys/class/dmi/id/product_name to be "kind"
	2021-08-17T00:11:36.859081500Z  + echo kind
	2021-08-17T00:11:36.860211300Z  + mount -o ro,bind /kind/product_name /sys/class/dmi/id/product_name
	2021-08-17T00:11:36.866057300Z  + fix_product_uuid
	2021-08-17T00:11:36.866079400Z  + [[ ! -f /kind/product_uuid ]]
	2021-08-17T00:11:36.866087300Z  + cat /proc/sys/kernel/random/uuid
	2021-08-17T00:11:36.876462100Z  + [[ -f /sys/class/dmi/id/product_uuid ]]
	2021-08-17T00:11:36.876969900Z  + echo 'INFO: faking /sys/class/dmi/id/product_uuid to be random'
	2021-08-17T00:11:36.877210200Z  INFO: faking /sys/class/dmi/id/product_uuid to be random
	2021-08-17T00:11:36.879453000Z  + mount -o ro,bind /kind/product_uuid /sys/class/dmi/id/product_uuid
	2021-08-17T00:11:36.886982100Z  + [[ -f /sys/devices/virtual/dmi/id/product_uuid ]]
	2021-08-17T00:11:36.888582400Z  + echo 'INFO: faking /sys/devices/virtual/dmi/id/product_uuid as well'
	2021-08-17T00:11:36.888821600Z  INFO: faking /sys/devices/virtual/dmi/id/product_uuid as well
	2021-08-17T00:11:36.888839000Z  + mount -o ro,bind /kind/product_uuid /sys/devices/virtual/dmi/id/product_uuid
	2021-08-17T00:11:36.894152700Z  + select_iptables
	2021-08-17T00:11:36.894575200Z  + local mode=nft
	2021-08-17T00:11:36.901100500Z  ++ grep '^-'
	2021-08-17T00:11:36.902450000Z  ++ wc -l
	2021-08-17T00:11:36.940975200Z  + num_legacy_lines=6
	2021-08-17T00:11:36.940998800Z  + '[' 6 -ge 10 ']'
	2021-08-17T00:11:36.953761800Z  ++ grep '^-'
	2021-08-17T00:11:36.954504000Z  ++ wc -l
	2021-08-17T00:11:36.990343000Z  ++ true
	2021-08-17T00:11:36.992024900Z  + num_nft_lines=0
	2021-08-17T00:11:36.992046100Z  + '[' 6 -ge 0 ']'
	2021-08-17T00:11:36.992051800Z  + mode=legacy
	2021-08-17T00:11:36.992056700Z  + echo 'INFO: setting iptables to detected mode: legacy'
	2021-08-17T00:11:36.992061600Z  INFO: setting iptables to detected mode: legacy
	2021-08-17T00:11:36.992077100Z  + update-alternatives --set iptables /usr/sbin/iptables-legacy
	2021-08-17T00:11:36.992082100Z  + echo 'retryable update-alternatives: --set iptables /usr/sbin/iptables-legacy'
	2021-08-17T00:11:36.992086700Z  + local 'args=--set iptables /usr/sbin/iptables-legacy'
	2021-08-17T00:11:36.995845500Z  ++ seq 0 15
	2021-08-17T00:11:37.017240700Z  + for i in $(seq 0 15)
	2021-08-17T00:11:37.017792600Z  + /usr/bin/update-alternatives --set iptables /usr/sbin/iptables-legacy
	2021-08-17T00:11:37.032853900Z  + return
	2021-08-17T00:11:37.033346400Z  + update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
	2021-08-17T00:11:37.036825500Z  + echo 'retryable update-alternatives: --set ip6tables /usr/sbin/ip6tables-legacy'
	2021-08-17T00:11:37.038608400Z  + local 'args=--set ip6tables /usr/sbin/ip6tables-legacy'
	2021-08-17T00:11:37.040181400Z  ++ seq 0 15
	2021-08-17T00:11:37.045200000Z  + for i in $(seq 0 15)
	2021-08-17T00:11:37.046757300Z  + /usr/bin/update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
	2021-08-17T00:11:37.064203500Z  + return
	2021-08-17T00:11:37.065062600Z  + enable_network_magic
	2021-08-17T00:11:37.065398900Z  + local docker_embedded_dns_ip=127.0.0.11
	2021-08-17T00:11:37.065618100Z  + local docker_host_ip
	2021-08-17T00:11:37.067453400Z  ++ cut '-d ' -f1
	2021-08-17T00:11:37.068831300Z  ++ head -n1 /dev/fd/63
	2021-08-17T00:11:37.072469700Z  +++ getent ahostsv4 host.docker.internal
	2021-08-17T00:11:37.110861200Z  + docker_host_ip=192.168.65.2
	2021-08-17T00:11:37.110891600Z  + [[ -z 192.168.65.2 ]]
	2021-08-17T00:11:37.110898100Z  + [[ 192.168.65.2 =~ ^127\.[0-9]+\.[0-9]+\.[0-9]+$ ]]
	2021-08-17T00:11:37.110903500Z  + iptables-restore
	2021-08-17T00:11:37.112176300Z  + iptables-save
	2021-08-17T00:11:37.123077100Z  + sed -e 's/-d 127.0.0.11/-d 192.168.65.2/g' -e 's/-A OUTPUT \(.*\) -j DOCKER_OUTPUT/\0\n-A PREROUTING \1 -j DOCKER_OUTPUT/' -e 's/--to-source :53/--to-source 192.168.65.2:53/g'
	2021-08-17T00:11:37.183064000Z  + cp /etc/resolv.conf /etc/resolv.conf.original
	2021-08-17T00:11:37.195216300Z  + sed -e s/127.0.0.11/192.168.65.2/g /etc/resolv.conf.original
	2021-08-17T00:11:37.215586600Z  ++ head -n1 /dev/fd/63
	2021-08-17T00:11:37.218734600Z  ++ cut '-d ' -f1
	2021-08-17T00:11:37.220367500Z  ++++ hostname
	2021-08-17T00:11:37.237765700Z  +++ getent ahostsv4 kubernetes-upgrade-20210817001119-111344
	2021-08-17T00:11:37.263163900Z  + curr_ipv4=192.168.67.2
	2021-08-17T00:11:37.270301000Z  + echo 'INFO: Detected IPv4 address: 192.168.67.2'
	2021-08-17T00:11:37.270323500Z  INFO: Detected IPv4 address: 192.168.67.2
	2021-08-17T00:11:37.270329300Z  + '[' -f /kind/old-ipv4 ']'
	2021-08-17T00:11:37.270366800Z  + [[ -n 192.168.67.2 ]]
	2021-08-17T00:11:37.270375600Z  + echo -n 192.168.67.2
	2021-08-17T00:11:37.277901800Z  ++ cut '-d ' -f1
	2021-08-17T00:11:37.278686700Z  ++ head -n1 /dev/fd/63
	2021-08-17T00:11:37.286903000Z  ++++ hostname
	2021-08-17T00:11:37.302063700Z  +++ getent ahostsv6 kubernetes-upgrade-20210817001119-111344
	2021-08-17T00:11:37.311130900Z  + curr_ipv6=
	2021-08-17T00:11:37.311154900Z  + echo 'INFO: Detected IPv6 address: '
	2021-08-17T00:11:37.311160900Z  INFO: Detected IPv6 address: 
	2021-08-17T00:11:37.312910700Z  + '[' -f /kind/old-ipv6 ']'
	2021-08-17T00:11:37.313342000Z  + [[ -n '' ]]
	2021-08-17T00:11:37.314897400Z  ++ uname -a
	2021-08-17T00:11:37.323158200Z  + echo 'entrypoint completed: Linux kubernetes-upgrade-20210817001119-111344 4.19.121-linuxkit #1 SMP Tue Dec 1 17:50:32 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux'
	2021-08-17T00:11:37.323319300Z  entrypoint completed: Linux kubernetes-upgrade-20210817001119-111344 4.19.121-linuxkit #1 SMP Tue Dec 1 17:50:32 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
	2021-08-17T00:11:37.324639400Z  + exec /sbin/init
	2021-08-17T00:11:37.343374600Z  Failed to find module 'autofs4'
	2021-08-17T00:11:37.346156400Z  systemd 245.4-4ubuntu3.11 running in system mode. (+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid)
	2021-08-17T00:11:37.346179400Z  Detected virtualization docker.
	2021-08-17T00:11:37.346185200Z  Detected architecture x86-64.
	2021-08-17T00:11:37.347705700Z  Failed to create symlink /sys/fs/cgroup/net_prio: File exists
	2021-08-17T00:11:37.349642600Z  Failed to create symlink /sys/fs/cgroup/net_cls: File exists
	2021-08-17T00:11:37.354123300Z  Failed to create symlink /sys/fs/cgroup/cpuacct: File exists
	2021-08-17T00:11:37.354142500Z  Failed to create symlink /sys/fs/cgroup/cpu: File exists
	2021-08-17T00:11:37.356830300Z  
	2021-08-17T00:11:37.356963600Z  Welcome to Ubuntu 20.04.2 LTS!
	2021-08-17T00:11:37.356972300Z  
	2021-08-17T00:11:37.356977100Z  Set hostname to <kubernetes-upgrade-20210817001119-111344>.
	2021-08-17T00:11:37.713834300Z  [  OK  ] Started Dispatch Password …ts to Console Directory Watch.
	2021-08-17T00:11:37.713865900Z  [UNSUPP] Starting of Arbitrary Exec…Automount Point not supported.
	2021-08-17T00:11:37.713872400Z  [  OK  ] Reached target Local Encrypted Volumes.
	2021-08-17T00:11:37.713877600Z  [  OK  ] Reached target Network is Online.
	2021-08-17T00:11:37.713882300Z  [  OK  ] Reached target Paths.
	2021-08-17T00:11:37.713898200Z  [  OK  ] Reached target Slices.
	2021-08-17T00:11:37.713903700Z  [  OK  ] Reached target Swap.
	2021-08-17T00:11:37.717135500Z  [  OK  ] Listening on Journal Audit Socket.
	2021-08-17T00:11:37.717378800Z  [  OK  ] Listening on Journal Socket (/dev/log).
	2021-08-17T00:11:37.717399600Z  [  OK  ] Listening on Journal Socket.
	2021-08-17T00:11:37.728009100Z           Mounting Huge Pages File System...
	2021-08-17T00:11:37.750914900Z           Mounting Kernel Debug File System...
	2021-08-17T00:11:37.774184800Z           Mounting Kernel Trace File System...
	2021-08-17T00:11:37.806929000Z           Starting Journal Service...
	2021-08-17T00:11:37.859788400Z           Starting Create list of st…odes for the current kernel...
	2021-08-17T00:11:37.888594500Z           Mounting FUSE Control File System...
	2021-08-17T00:11:37.949831400Z           Starting Remount Root and Kernel File Systems...
	2021-08-17T00:11:37.989880900Z           Starting Apply Kernel Variables...
	2021-08-17T00:11:37.999456500Z  [  OK  ] Mounted Huge Pages File System.
	2021-08-17T00:11:37.999483700Z  [  OK  ] Mounted Kernel Debug File System.
	2021-08-17T00:11:38.000058400Z  [  OK  ] Mounted Kernel Trace File System.
	2021-08-17T00:11:38.000068000Z  [  OK  ] Finished Create list of st… nodes for the current kernel.
	2021-08-17T00:11:38.000073300Z  [  OK  ] Mounted FUSE Control File System.
	2021-08-17T00:11:38.049523800Z  [  OK  ] Finished Remount Root and Kernel File Systems.
	2021-08-17T00:11:38.071886600Z           Starting Create System Users...
	2021-08-17T00:11:38.081876000Z           Starting Update UTMP about System Boot/Shutdown...
	2021-08-17T00:11:38.154390500Z  [  OK  ] Finished Apply Kernel Variables.
	2021-08-17T00:11:38.185367800Z  [  OK  ] Finished Update UTMP about System Boot/Shutdown.
	2021-08-17T00:11:38.188616700Z  [  OK  ] Started Journal Service.
	2021-08-17T00:11:38.199821300Z           Starting Flush Journal to Persistent Storage...
	2021-08-17T00:11:38.236726100Z  [  OK  ] Finished Flush Journal to Persistent Storage.
	2021-08-17T00:11:38.302172100Z  [  OK  ] Finished Create System Users.
	2021-08-17T00:11:38.322479600Z           Starting Create Static Device Nodes in /dev...
	2021-08-17T00:11:38.364212800Z  [  OK  ] Finished Create Static Device Nodes in /dev.
	2021-08-17T00:11:38.368553200Z  [  OK  ] Reached target Local File Systems (Pre).
	2021-08-17T00:11:38.368578100Z  [  OK  ] Reached target Local File Systems.
	2021-08-17T00:11:38.368584900Z  [  OK  ] Reached target System Initialization.
	2021-08-17T00:11:38.368590500Z  [  OK  ] Started Daily Cleanup of Temporary Directories.
	2021-08-17T00:11:38.368596300Z  [  OK  ] Reached target Timers.
	2021-08-17T00:11:38.368601600Z  [  OK  ] Listening on D-Bus System Message Bus Socket.
	2021-08-17T00:11:38.384399100Z           Starting Docker Socket for the API.
	2021-08-17T00:11:38.386150600Z           Starting Podman API Socket.
	2021-08-17T00:11:38.401086700Z  [  OK  ] Listening on Podman API Socket.
	2021-08-17T00:11:38.404581900Z  [  OK  ] Listening on Docker Socket for the API.
	2021-08-17T00:11:38.404604500Z  [  OK  ] Reached target Sockets.
	2021-08-17T00:11:38.404610900Z  [  OK  ] Reached target Basic System.
	2021-08-17T00:11:38.406960700Z           Starting containerd container runtime...
	2021-08-17T00:11:38.419473300Z  [  OK  ] Started D-Bus System Message Bus.
	2021-08-17T00:11:38.440784900Z           Starting minikube automount...
	2021-08-17T00:11:38.454375500Z           Starting OpenBSD Secure Shell server...
	2021-08-17T00:11:38.612632600Z  [  OK  ] Started OpenBSD Secure Shell server.
	2021-08-17T00:11:38.711655100Z  [  OK  ] Finished minikube automount.
	2021-08-17T00:11:38.999486100Z  [  OK  ] Started containerd container runtime.
	2021-08-17T00:11:38.999592200Z           Starting Docker Application Container Engine...
	2021-08-17T00:11:40.529098100Z  [  OK  ] Started Docker Application Container Engine.
	2021-08-17T00:11:40.532724200Z  [  OK  ] Reached target Multi-User System.
	2021-08-17T00:11:40.533061900Z  [  OK  ] Reached target Graphical Interface.
	2021-08-17T00:11:40.543638200Z           Starting Update UTMP about System Runlevel Changes...
	2021-08-17T00:11:40.580073200Z  [  OK  ] Finished Update UTMP about System Runlevel Changes.
	2021-08-17T00:14:52.425468600Z  [  OK  ] Stopped target Graphical Interface.
	2021-08-17T00:14:52.432382900Z  [  OK  ] Stopped target Multi-User System.
	2021-08-17T00:14:52.456562000Z  [  OK  ] Stopped target Timers.
	2021-08-17T00:14:52.458287200Z  [  OK  ] Stopped Daily Cleanup of Temporary Directories.
	2021-08-17T00:14:52.468150000Z           Stopping D-Bus System Message Bus...
	2021-08-17T00:14:52.469347700Z           Stopping Docker Application Container Engine...
	2021-08-17T00:14:52.472843700Z           Stopping kubelet: The Kubernetes Node Agent...
	2021-08-17T00:14:52.476889500Z           Stopping OpenBSD Secure Shell server...
	2021-08-17T00:14:52.566976300Z  [  OK  ] Stopped D-Bus System Message Bus.
	2021-08-17T00:14:52.640275500Z  [  OK  ] Stopped OpenBSD Secure Shell server.
	2021-08-17T00:14:53.071671600Z  [  OK  ] Stopped kubelet: The Kubernetes Node Agent.
	2021-08-17T00:14:54.498123800Z  [  OK  ] Unmounted /var/lib/docker/…69462313694250fe592391/merged.
	2021-08-17T00:14:54.912411600Z  [  OK  ] Unmounted /var/lib/docker/…1f94095558e9f6bd89/mounts/shm.
	2021-08-17T00:14:54.925199900Z  [  OK  ] Unmounted /var/lib/docker/…1c8f7f35f07c7a6a8d0ce2/merged.
	2021-08-17T00:14:54.932485000Z  [  OK  ] Unmounted /var/lib/docker/…47073a1711ef4c4e04e0b5/merged.
	2021-08-17T00:14:55.209232000Z  [  OK  ] Unmounted /var/lib/docker/…1bfd12ed1476983fc8/mounts/shm.
	2021-08-17T00:14:55.214209600Z  [  OK  ] Unmounted /var/lib/docker/…7e5e9d16f256d9c8cc0b33/merged.
	2021-08-17T00:14:55.274592100Z  [  OK  ] Unmounted /var/lib/docker/…f83fb2d6dd91a50660/mounts/shm.
	2021-08-17T00:14:55.284562700Z  [  OK  ] Unmounted /var/lib/docker/…91cabb007203dee80b6b12/merged.
	2021-08-17T00:14:55.352113800Z  [  OK  ] Unmounted /var/lib/docker/…e14bdbf4dd511904da/mounts/shm.
	2021-08-17T00:14:55.402188500Z  [  OK  ] Unmounted /var/lib/docker/…c8b9de2d1971e12bb86986/merged.
	2021-08-17T00:14:55.403292600Z  [  OK  ] Unmounted /var/lib/docker/…d1bdb89458dcaf057e32c9/merged.
	2021-08-17T00:14:57.622429600Z  [***   ] A stop job is running for Docker Ap…n Container Engine (5s / 1min 30s)
	2021-08-17T00:14:58.058978800Z  [ ***  ] A stop job is running for Docker Ap…Container Engine (41us / 1min 24s)
	2021-08-17T00:14:58.615189700Z  [  *** ] A stop job is running for Docker Ap…ontainer Engine (557ms / 1min 24s)
	2021-08-17T00:14:59.116322200Z  [   ***] A stop job is running for Docker Ap…n Container Engine (1s / 1min 24s)
	2021-08-17T00:14:59.618891500Z  [    **] A stop job is running for Docker Ap…n Container Engine (1s / 1min 24s)
	2021-08-17T00:15:00.113894400Z  [     *] A stop job is running for Docker Ap…n Container Engine (2s / 1min 24s)
	2021-08-17T00:15:00.614658600Z  [    **] A stop job is running for Docker Ap…n Container Engine (2s / 1min 24s)
	2021-08-17T00:15:01.114774800Z  [   ***] A stop job is running for Docker Ap…n Container Engine (3s / 1min 24s)
	2021-08-17T00:15:01.613082700Z  [  *** ] A stop job is running for Docker Ap…n Container Engine (3s / 1min 24s)
	2021-08-17T00:15:02.113879500Z  [ ***  ] A stop job is running for Docker Ap…n Container Engine (4s / 1min 24s)
	2021-08-17T00:15:02.613953700Z  [***   ] A stop job is running for Docker Ap…n Container Engine (4s / 1min 24s)
	2021-08-17T00:15:03.116308700Z  [**    ] A stop job is running for Docker Ap…n Container Engine (5s / 1min 24s)
	2021-08-17T00:15:03.614219200Z  [*     ] A stop job is running for Docker Ap…n Container Engine (5s / 1min 24s)
	2021-08-17T00:15:04.112928000Z  [**    ] A stop job is running for Docker Ap…n Container Engine (6s / 1min 24s)
	2021-08-17T00:15:04.428470200Z  [  OK  ] Unmounted /var/lib/docker/…6c9d50df4f7ca1e03a3c3e/merged.
	2021-08-17T00:15:04.594418200Z  [  OK  ] Stopped Docker Application Container Engine.
	2021-08-17T00:15:04.595705200Z  [  OK  ] Stopped target Network is Online.
	2021-08-17T00:15:04.596294100Z           Stopping containerd container runtime...
	2021-08-17T00:15:04.612847000Z  [  OK  ] Stopped minikube automount.
	2021-08-17T00:15:04.724610100Z  [  OK  ] Stopped containerd container runtime.
	2021-08-17T00:15:04.724990400Z  [  OK  ] Stopped target Basic System.
	2021-08-17T00:15:04.725528300Z  [  OK  ] Stopped target Paths.
	2021-08-17T00:15:04.725757700Z  [  OK  ] Stopped target Slices.
	2021-08-17T00:15:04.725771200Z  [  OK  ] Stopped target Sockets.
	2021-08-17T00:15:04.727308000Z  [  OK  ] Closed D-Bus System Message Bus Socket.
	2021-08-17T00:15:04.728278400Z  [  OK  ] Closed Docker Socket for the API.
	2021-08-17T00:15:04.729466300Z  [  OK  ] Closed Podman API Socket.
	2021-08-17T00:15:04.729484700Z  [  OK  ] Stopped target System Initialization.
	2021-08-17T00:15:04.729491300Z  [  OK  ] Stopped target Local Encrypted Volumes.
	2021-08-17T00:15:04.751858900Z  [  OK  ] Stopped Dispatch Password …ts to Console Directory Watch.
	2021-08-17T00:15:04.752502000Z  [  OK  ] Stopped target Local File Systems.
	2021-08-17T00:15:04.754603800Z           Unmounting /data...
	2021-08-17T00:15:04.762752800Z           Unmounting /etc/hostname...
	2021-08-17T00:15:04.765363000Z           Unmounting /etc/hosts...
	2021-08-17T00:15:04.782567900Z           Unmounting /etc/resolv.conf...
	2021-08-17T00:15:04.786597000Z           Unmounting /kind/product_uuid...
	2021-08-17T00:15:04.809592600Z           Unmounting /run/docker/netns/default...
	2021-08-17T00:15:04.811643500Z           Unmounting /tmp/hostpath-provisioner...
	2021-08-17T00:15:04.819730400Z           Unmounting /tmp/hostpath_pv...
	2021-08-17T00:15:04.831370000Z           Unmounting /usr/lib/modules...
	2021-08-17T00:15:04.835681500Z  [  OK  ] Stopped Apply Kernel Variables.
	2021-08-17T00:15:04.838422200Z           Stopping Update UTMP about System Boot/Shutdown...
	2021-08-17T00:15:04.873429100Z  [  OK  ] Unmounted /data.
	2021-08-17T00:15:04.883491300Z  [  OK  ] Unmounted /etc/hosts.
	2021-08-17T00:15:04.889932400Z  [  OK  ] Unmounted /etc/resolv.conf.
	2021-08-17T00:15:04.900479100Z  [  OK  ] Unmounted /tmp/hostpath_pv.
	2021-08-17T00:15:04.912907100Z  [  OK  ] Stopped Update UTMP about System Boot/Shutdown.
	2021-08-17T00:15:04.917671700Z           Unmounting /var...
	2021-08-17T00:15:04.982240100Z  [  OK  ] Unmounted /etc/hostname.
	2021-08-17T00:15:04.984673000Z  [  OK  ] Unmounted /kind/product_uuid.
	2021-08-17T00:15:04.987450500Z  [  OK  ] Unmounted /run/docker/netns/default.
	2021-08-17T00:15:04.992757200Z  [  OK  ] Unmounted /tmp/hostpath-provisioner.
	2021-08-17T00:15:04.997335400Z  [  OK  ] Unmounted /usr/lib/modules.
	2021-08-17T00:15:05.000172400Z  [  OK  ] Unmounted /var.
	2021-08-17T00:15:05.000873700Z           Unmounting /tmp...
	2021-08-17T00:15:05.093323100Z  [  OK  ] Unmounted /tmp.
	2021-08-17T00:15:05.094269600Z  [  OK  ] Stopped target Local File Systems (Pre).
	2021-08-17T00:15:05.094292700Z  [  OK  ] Stopped target Swap.
	2021-08-17T00:15:05.094298800Z  [  OK  ] Reached target Unmount All Filesystems.
	2021-08-17T00:15:05.098706600Z  [  OK  ] Stopped Create Static Device Nodes in /dev.
	2021-08-17T00:15:05.100442100Z  [  OK  ] Stopped Create System Users.
	2021-08-17T00:15:05.101848700Z  [  OK  ] Stopped Remount Root and Kernel File Systems.
	2021-08-17T00:15:05.101870300Z  [  OK  ] Reached target Shutdown.
	2021-08-17T00:15:05.101878100Z  [  OK  ] Reached target Final Step.
	2021-08-17T00:15:05.128799300Z           Starting Halt...
	2021-08-17T00:15:05.140314800Z  [  OK  ] Finished Power-Off.
	2021-08-17T00:15:05.141855300Z  [  OK  ] Reached target Power-Off.
	
	-- /stdout --
	I0817 00:15:21.749517  106960 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 00:15:22.623705  106960 info.go:263] docker info: {ID:4XCY:3GZD:KK67:IPM7:RRQF:WWZF:OGQ6:X6HQ:572M:7N57:P63G:EAE5 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:true NGoroutines:61 SystemTime:2021-08-17 00:15:22.2317187 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0817 00:15:22.623705  106960 errors.go:98] postmortem docker info: {ID:4XCY:3GZD:KK67:IPM7:RRQF:WWZF:OGQ6:X6HQ:572M:7N57:P63G:EAE5 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:true NGoroutines:61 SystemTime:2021-08-17 00:15:22.2317187 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0817 00:15:22.631800  106960 network_create.go:255] running [docker network inspect kubernetes-upgrade-20210817001119-111344] to gather additional debugging logs...
	I0817 00:15:22.631800  106960 cli_runner.go:115] Run: docker network inspect kubernetes-upgrade-20210817001119-111344
	W0817 00:15:23.183930  106960 cli_runner.go:162] docker network inspect kubernetes-upgrade-20210817001119-111344 returned with exit code 1
	I0817 00:15:23.184191  106960 network_create.go:258] error running [docker network inspect kubernetes-upgrade-20210817001119-111344]: docker network inspect kubernetes-upgrade-20210817001119-111344: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubernetes-upgrade-20210817001119-111344
	I0817 00:15:23.184393  106960 network_create.go:260] output of [docker network inspect kubernetes-upgrade-20210817001119-111344]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubernetes-upgrade-20210817001119-111344
	
	** /stderr **
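
The non-zero exit from "docker network inspect" above is the expected probe result when the named network has already been removed; minikube runs the command only to gather debugging output. A minimal Go sketch of that probe pattern (illustrative only, not the actual network_create.go source) treats exit status 1 with "No such network" on stderr as "network absent" rather than as a hard failure:

package main

import (
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

// networkExists probes for a Docker network by name. `docker network inspect`
// exits 1 and prints "No such network: <name>" to stderr when it is missing.
func networkExists(name string) (bool, error) {
	cmd := exec.Command("docker", "network", "inspect", name)
	var stdout, stderr bytes.Buffer
	cmd.Stdout = &stdout
	cmd.Stderr = &stderr
	if err := cmd.Run(); err != nil {
		if strings.Contains(stderr.String(), "No such network") {
			return false, nil // absent, but not an error
		}
		return false, fmt.Errorf("docker network inspect %s: %v\nstderr: %s",
			name, err, stderr.String())
	}
	return true, nil
}

func main() {
	ok, err := networkExists("kubernetes-upgrade-20210817001119-111344")
	fmt.Println(ok, err)
}
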
	I0817 00:15:23.191403  106960 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 00:15:24.037707  106960 info.go:263] docker info: {ID:4XCY:3GZD:KK67:IPM7:RRQF:WWZF:OGQ6:X6HQ:572M:7N57:P63G:EAE5 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:true NGoroutines:61 SystemTime:2021-08-17 00:15:23.6414014 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
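
The docker system info --format "{{json .}}" call above returns the same data as the printed Go struct, but as JSON, which is far easier to consume programmatically. A minimal sketch, assuming Docker's standard JSON field names (ServerVersion, OperatingSystem, NCPU, MemTotal all appear in the dump above):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// dockerInfo decodes only the fields we care about from `docker system info`.
type dockerInfo struct {
	ServerVersion   string `json:"ServerVersion"`
	OperatingSystem string `json:"OperatingSystem"`
	NCPU            int    `json:"NCPU"`
	MemTotal        int64  `json:"MemTotal"`
}

func main() {
	out, err := exec.Command("docker", "system", "info",
		"--format", "{{json .}}").Output()
	if err != nil {
		panic(err)
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	fmt.Printf("docker %s on %s, %d CPUs, %d bytes RAM\n",
		info.ServerVersion, info.OperatingSystem, info.NCPU, info.MemTotal)
}
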
	I0817 00:15:24.049210  106960 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20210817001119-111344
	I0817 00:15:24.573588  106960 profile.go:148] Saving config to C:\Users\jenkins\minikube-integration\.minikube\profiles\kubernetes-upgrade-20210817001119-111344\config.json ...
	I0817 00:15:24.574608  106960 machine.go:88] provisioning docker machine ...
	I0817 00:15:24.574608  106960 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-20210817001119-111344"
	I0817 00:15:24.581442  106960 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210817001119-111344
	W0817 00:15:25.107403  106960 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210817001119-111344 returned with exit code 1
	I0817 00:15:25.107829  106960 machine.go:91] provisioned docker machine in 533.2007ms
	I0817 00:15:25.117516  106960 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0817 00:15:25.123616  106960 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210817001119-111344
	W0817 00:15:25.656641  106960 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210817001119-111344 returned with exit code 1
	I0817 00:15:25.657053  106960 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0817 00:15:25.940108  106960 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210817001119-111344
	W0817 00:15:26.424710  106960 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210817001119-111344 returned with exit code 1
	I0817 00:15:26.425331  106960 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0817 00:15:26.977804  106960 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210817001119-111344
	W0817 00:15:27.501910  106960 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210817001119-111344 returned with exit code 1
	W0817 00:15:27.502535  106960 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0817 00:15:27.502674  106960 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0817 00:15:27.502674  106960 fix.go:57] fixHost completed within 8.550976s
	I0817 00:15:27.502674  106960 start.go:80] releasing machines lock for "kubernetes-upgrade-20210817001119-111344", held for 8.550976s
	W0817 00:15:27.502913  106960 start.go:521] error starting host: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	W0817 00:15:27.503317  106960 out.go:242] ! StartHost failed, but will try again: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	! StartHost failed, but will try again: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	I0817 00:15:27.503317  106960 start.go:536] Will try again in 5 seconds ...
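
The retry.go lines above illustrate the pattern in play: each failed container inspect is retried after a roughly doubling delay, and when the whole provisioning attempt fails, StartHost waits a fixed 5 seconds and tries once more. A standalone sketch of that inner backoff loop (the delays and cutoff here are illustrative, not minikube's exact backoff parameters):

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryExpBackoff retries op with a roughly doubling delay between failures
// until it succeeds or maxElapsed would be exceeded.
func retryExpBackoff(op func() error, maxElapsed time.Duration) error {
	delay := 250 * time.Millisecond
	deadline := time.Now().Add(maxElapsed)
	for {
		err := op()
		if err == nil {
			return nil
		}
		if time.Now().Add(delay).After(deadline) {
			return fmt.Errorf("giving up: %w", err)
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2
	}
}

func main() {
	attempts := 0
	err := retryExpBackoff(func() error {
		attempts++
		if attempts < 3 { // simulate two failures before success
			return errors.New("get ssh host-port: container not running")
		}
		return nil
	}, 10*time.Second)
	fmt.Println("result:", err)
}
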
	I0817 00:15:32.503792  106960 start.go:313] acquiring machines lock for kubernetes-upgrade-20210817001119-111344: {Name:mkd9283fa2f1b3b972171104bec0702df88f1fb0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 00:15:32.504356  106960 start.go:317] acquired machines lock for "kubernetes-upgrade-20210817001119-111344" in 263.1µs
	I0817 00:15:32.504492  106960 start.go:93] Skipping create...Using existing machine configuration
	I0817 00:15:32.504492  106960 fix.go:55] fixHost starting: 
	I0817 00:15:32.516322  106960 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20210817001119-111344 --format={{.State.Status}}
	I0817 00:15:33.007520  106960 fix.go:108] recreateIfNeeded on kubernetes-upgrade-20210817001119-111344: state=Stopped err=<nil>
	W0817 00:15:33.007520  106960 fix.go:134] unexpected machine state, will restart: <nil>
	I0817 00:15:33.013357  106960 out.go:177] * Restarting existing docker container for "kubernetes-upgrade-20210817001119-111344" ...
	I0817 00:15:33.015270  106960 cli_runner.go:115] Run: docker start kubernetes-upgrade-20210817001119-111344
	W0817 00:15:33.584657  106960 cli_runner.go:162] docker start kubernetes-upgrade-20210817001119-111344 returned with exit code 1
	I0817 00:15:33.590576  106960 cli_runner.go:115] Run: docker inspect kubernetes-upgrade-20210817001119-111344
	I0817 00:15:34.086571  106960 errors.go:84] Postmortem inspect ("docker inspect kubernetes-upgrade-20210817001119-111344"): -- stdout --
	[
	    {
	        "Id": "2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da",
	        "Created": "2021-08-17T00:11:33.2667132Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 130,
	            "Error": "network 76fe7f8c8a06cbce45c22be0496564774e7468502c41c92f5937f12e17cdef08 not found",
	            "StartedAt": "2021-08-17T00:11:35.924797Z",
	            "FinishedAt": "2021-08-17T00:15:05.3732415Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da/hostname",
	        "HostsPath": "/var/lib/docker/containers/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da/hosts",
	        "LogPath": "/var/lib/docker/containers/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da-json.log",
	        "Name": "/kubernetes-upgrade-20210817001119-111344",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-20210817001119-111344:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-20210817001119-111344",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/afe12c7d243f8c21e4659e09f7081dd299d594587a068386ae355047ce465d53-init/diff:/var/lib/docker/overlay2/e167e57d4b442602b2435f5ffd2147b1da53de34df49d96ce69565867fcf3850/diff:/var/lib/docker/overlay2/dbfef15a73962254d5bcc2c91a409021fc3573c3135096621d707c6f4feaac7d/diff:/var/lib/docker/overlay2/7fc44848dc580276135d9db2b62ce047cfba1909de5e91acbe8c1a5fc8fb3649/diff:/var/lib/docker/overlay2/493996ff2d6a75ef70db2749dded6936397fe536c32e28dda979b8af93e19f13/diff:/var/lib/docker/overlay2/b862553905dec6f42a41351a012fdce386251d97160f74f6b1feb3b455e1f53a/diff:/var/lib/docker/overlay2/517a8b2830d9e81ff950c8305063a6681219abbb7b22f3a87587fa819a0728ed/diff:/var/lib/docker/overlay2/f2b268080cfd9bbb64731ea6b7cb2ec64077e6c2701c2ab6e8b358a541056c5d/diff:/var/lib/docker/overlay2/ee5e612696333c681900cad605a1f678e9114e9c7ecf70717fad21aea1e52992/diff:/var/lib/docker/overlay2/6f44289af0b09a02645c237aabeff61487c57040b9531c0f7bd97517308bfd57/diff:/var/lib/docker/overlay2/f98f67
21a411bacf9d310d4d4405fbd528fa90d60af5ffabda9d55cef9ef3033/diff:/var/lib/docker/overlay2/8bc2e0f6b7c2aeccc6a944f316dbac5672f8685cc5dd5d3c2fc4bd370db4949f/diff:/var/lib/docker/overlay2/ef9e793c1e243004ff088f210369994837eb19a8abd21cf93f75257155445f16/diff:/var/lib/docker/overlay2/48fa7f37fc37f8220a31f4294bc800ef7a33c53c10bdc23d7dc68f27cfe4e535/diff:/var/lib/docker/overlay2/54bc5e0e0c32fdc66ce3eeb345721201a63a0c878d4665607246cd4aa5af61e5/diff:/var/lib/docker/overlay2/398c3fc63254fcc564086ced0eb7211f2d474f8bbdcd43ee27fd609e767c44a6/diff:/var/lib/docker/overlay2/796acb5b93384da004a8065a332cbb07c952569bdd7bb5e551b218e4c5c61f73/diff:/var/lib/docker/overlay2/d90baef87ad95bdfb14a2f35e4cb62336e18c21eb934266f43bfbe017252b857/diff:/var/lib/docker/overlay2/c16752decc8ef06fce4eebdf4ff4725414f3aa80cccd7b3ffdc325095930c0b4/diff:/var/lib/docker/overlay2/a679084eec181b0e1408e573d1ac08c47af1fd8266eb5884bf1a38d5ba0ddbbc/diff:/var/lib/docker/overlay2/15becb79b0d40211562ae33ddc5ec776276b9ae42c8a9f4645dcc6442b36f771/diff:/var/lib/d
ocker/overlay2/068a9a5dce1094eb72788237bd9cda4c76345774d5e647f0af81302a75861f4a/diff:/var/lib/docker/overlay2/74b9e9d807e09734ee96c76bc67adc56c9e3286b39315f89f6747c8c917ad2e5/diff:/var/lib/docker/overlay2/75de8e4895a0b4efe563705c06184db384b5c40154856b9bca2106a8d59fc151/diff:/var/lib/docker/overlay2/cbca3c40b21fee2ef276744168492f17203934aca8de4b459edae2fa55b6bb02/diff:/var/lib/docker/overlay2/584d28a6308bb998bd89d7d92c45b57b9dd66de472d166972d2f5195afd9dd44/diff:/var/lib/docker/overlay2/9c722118749c036eb2d00ba5a6925c5f32b121d64974c99e2de552b26a8bb7cd/diff:/var/lib/docker/overlay2/24908c792743f57c182587c66263f074ed86ae7c5812c631dea82d8ec6650e81/diff:/var/lib/docker/overlay2/9a8de59bfb816b3fc2f0fd522ef966196534483b5e87aafd180dd8b07e9c3582/diff:/var/lib/docker/overlay2/df46d170084213da519dea7e0f402d51272dc10df4d7cd7f37c528c411ac7000/diff:/var/lib/docker/overlay2/36b86a6f515e5882426e598755bb77d43cc340fd20798dfd0a810cd2ab96eeb6/diff:/var/lib/docker/overlay2/b54ac02f70047359cd143a32f862d18498cb556877ccfd252defb9d17fc
9d9f5/diff:/var/lib/docker/overlay2/971b77d080920997e1d0d0936f312a9a322ccd6ab9920c83a8eb5d14b93c3849/diff:/var/lib/docker/overlay2/5b5c21ae360c7e0738c0048bc3fe8d7d3cc0640d266660121f3968f675f42063/diff:/var/lib/docker/overlay2/e07bf2561a99ba47435b8f84b267268e02e9e4ff47832bd5054ee28bb1ca5001/diff:/var/lib/docker/overlay2/0c560be48f01814af21ec54fc79ea5e8db28f05e967a17b331be28ad61c75483/diff:/var/lib/docker/overlay2/27930667f3fd0fd38c13a39c0590c03a2c3b3ba04f0a3c946167be6a40f50c46/diff",
	                "MergedDir": "/var/lib/docker/overlay2/afe12c7d243f8c21e4659e09f7081dd299d594587a068386ae355047ce465d53/merged",
	                "UpperDir": "/var/lib/docker/overlay2/afe12c7d243f8c21e4659e09f7081dd299d594587a068386ae355047ce465d53/diff",
	                "WorkDir": "/var/lib/docker/overlay2/afe12c7d243f8c21e4659e09f7081dd299d594587a068386ae355047ce465d53/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-20210817001119-111344",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-20210817001119-111344/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-20210817001119-111344",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-20210817001119-111344",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-20210817001119-111344",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5e7fda3ade8792c0aba0069fda9edea4aa344e0c9943b3ec20c40c66d95a42b9",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {},
	            "SandboxKey": "/var/run/docker/netns/5e7fda3ade87",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-20210817001119-111344": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "2c252d82645a",
	                        "kubernetes-upgrade-20210817001119-111344"
	                    ],
	                    "NetworkID": "76fe7f8c8a06cbce45c22be0496564774e7468502c41c92f5937f12e17cdef08",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
	
	-- /stdout --
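
The State object in the postmortem inspect above carries the root cause: the container exited with code 130 and Error "network 76fe7f8c8a06cbce45c22be0496564774e7468502c41c92f5937f12e17cdef08 not found", so the subsequent docker start has no network to reattach. A small sketch of pulling just that State object out of docker inspect output (the JSON field names match the inspect dump shown above):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// containerState decodes only the State object from `docker inspect`,
// which prints a JSON array with one element per inspected container.
type containerState struct {
	State struct {
		Status   string `json:"Status"`
		Running  bool   `json:"Running"`
		ExitCode int    `json:"ExitCode"`
		Error    string `json:"Error"`
	} `json:"State"`
}

func main() {
	out, err := exec.Command("docker", "inspect",
		"kubernetes-upgrade-20210817001119-111344").Output()
	if err != nil {
		panic(err)
	}
	var containers []containerState
	if err := json.Unmarshal(out, &containers); err != nil {
		panic(err)
	}
	for _, c := range containers {
		fmt.Printf("status=%s running=%v exit=%d err=%q\n",
			c.State.Status, c.State.Running, c.State.ExitCode, c.State.Error)
	}
}
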
	I0817 00:15:34.092846  106960 cli_runner.go:115] Run: docker logs --timestamps --details kubernetes-upgrade-20210817001119-111344
	I0817 00:15:34.618175  106960 errors.go:91] Postmortem logs ("docker logs --timestamps --details kubernetes-upgrade-20210817001119-111344"): -- stdout --
	2021-08-17T00:11:35.908761000Z  + configure_containerd
	2021-08-17T00:11:35.908805200Z  ++ stat -f -c %T /kind
	2021-08-17T00:11:35.913670800Z  + [[ overlayfs == \z\f\s ]]
	2021-08-17T00:11:35.917897300Z  + configure_proxy
	2021-08-17T00:11:35.917918100Z  + mkdir -p /etc/systemd/system.conf.d/
	2021-08-17T00:11:35.917923900Z  + [[ ! -z '' ]]
	2021-08-17T00:11:35.917928600Z  + cat
	2021-08-17T00:11:35.924368600Z  + fix_kmsg
	2021-08-17T00:11:35.925780100Z  + [[ ! -e /dev/kmsg ]]
	2021-08-17T00:11:35.926379900Z  + fix_mount
	2021-08-17T00:11:35.929989100Z  + echo 'INFO: ensuring we can execute mount/umount even with userns-remap'
	2021-08-17T00:11:35.931605600Z  INFO: ensuring we can execute mount/umount even with userns-remap
	2021-08-17T00:11:35.932555900Z  ++ which mount
	2021-08-17T00:11:35.937233900Z  ++ which umount
	2021-08-17T00:11:35.941130600Z  + chown root:root /usr/bin/mount /usr/bin/umount
	2021-08-17T00:11:35.990840200Z  ++ which mount
	2021-08-17T00:11:35.994161900Z  ++ which umount
	2021-08-17T00:11:36.010401600Z  + chmod -s /usr/bin/mount /usr/bin/umount
	2021-08-17T00:11:36.016596000Z  +++ which mount
	2021-08-17T00:11:36.021522700Z  ++ stat -f -c %T /usr/bin/mount
	2021-08-17T00:11:36.034160300Z  + [[ overlayfs == \a\u\f\s ]]
	2021-08-17T00:11:36.034178900Z  + echo 'INFO: remounting /sys read-only'
	2021-08-17T00:11:36.034184100Z  INFO: remounting /sys read-only
	2021-08-17T00:11:36.034189200Z  + mount -o remount,ro /sys
	2021-08-17T00:11:36.036163400Z  + echo 'INFO: making mounts shared'
	2021-08-17T00:11:36.036552500Z  INFO: making mounts shared
	2021-08-17T00:11:36.039947000Z  + mount --make-rshared /
	2021-08-17T00:11:36.045381500Z  + retryable_fix_cgroup
	2021-08-17T00:11:36.052099000Z  ++ seq 0 10
	2021-08-17T00:11:36.058791200Z  + for i in $(seq 0 10)
	2021-08-17T00:11:36.058810100Z  + fix_cgroup
	2021-08-17T00:11:36.058815400Z  + [[ -f /sys/fs/cgroup/cgroup.controllers ]]
	2021-08-17T00:11:36.058819900Z  + echo 'INFO: detected cgroup v1'
	2021-08-17T00:11:36.058824300Z  INFO: detected cgroup v1
	2021-08-17T00:11:36.058828800Z  + echo 'INFO: fix cgroup mounts for all subsystems'
	2021-08-17T00:11:36.058833300Z  INFO: fix cgroup mounts for all subsystems
	2021-08-17T00:11:36.058837600Z  + local current_cgroup
	2021-08-17T00:11:36.059089500Z  ++ cut -d: -f3
	2021-08-17T00:11:36.062062200Z  ++ grep -E '^[^:]*:([^:]*,)?cpu(,[^,:]*)?:.*' /proc/self/cgroup
	2021-08-17T00:11:36.080013000Z  + current_cgroup=/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.080036200Z  + local cgroup_subsystems
	2021-08-17T00:11:36.083395500Z  ++ findmnt -lun -o source,target -t cgroup
	2021-08-17T00:11:36.083542000Z  ++ grep /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.086238200Z  ++ awk '{print $2}'
	2021-08-17T00:11:36.099551900Z  + cgroup_subsystems='/sys/fs/cgroup/cpuset
	2021-08-17T00:11:36.099562900Z  /sys/fs/cgroup/cpu
	2021-08-17T00:11:36.099568000Z  /sys/fs/cgroup/cpuacct
	2021-08-17T00:11:36.099572500Z  /sys/fs/cgroup/blkio
	2021-08-17T00:11:36.099578000Z  /sys/fs/cgroup/memory
	2021-08-17T00:11:36.099582300Z  /sys/fs/cgroup/devices
	2021-08-17T00:11:36.099586500Z  /sys/fs/cgroup/freezer
	2021-08-17T00:11:36.099591000Z  /sys/fs/cgroup/net_cls
	2021-08-17T00:11:36.099595500Z  /sys/fs/cgroup/perf_event
	2021-08-17T00:11:36.099599900Z  /sys/fs/cgroup/net_prio
	2021-08-17T00:11:36.099604200Z  /sys/fs/cgroup/hugetlb
	2021-08-17T00:11:36.099608400Z  /sys/fs/cgroup/pids
	2021-08-17T00:11:36.099612700Z  /sys/fs/cgroup/systemd'
	2021-08-17T00:11:36.099618000Z  + local cgroup_mounts
	2021-08-17T00:11:36.106396600Z  ++ grep -E -o '/[[:alnum:]].* /sys/fs/cgroup.*.*cgroup' /proc/self/mountinfo
	2021-08-17T00:11:36.118124900Z  + cgroup_mounts='/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime shared:173 master:18 - cgroup
	2021-08-17T00:11:36.118141100Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/cpu rw,nosuid,nodev,noexec,relatime shared:174 master:19 - cgroup
	2021-08-17T00:11:36.118148100Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/cpuacct rw,nosuid,nodev,noexec,relatime shared:175 master:20 - cgroup
	2021-08-17T00:11:36.118522700Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime shared:176 master:21 - cgroup
	2021-08-17T00:11:36.118530500Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime shared:177 master:22 - cgroup
	2021-08-17T00:11:36.118535700Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime shared:178 master:23 - cgroup
	2021-08-17T00:11:36.118540700Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime shared:186 master:24 - cgroup
	2021-08-17T00:11:36.118546000Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/net_cls rw,nosuid,nodev,noexec,relatime shared:187 master:25 - cgroup
	2021-08-17T00:11:36.118551200Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime shared:188 master:26 - cgroup
	2021-08-17T00:11:36.118556300Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/net_prio rw,nosuid,nodev,noexec,relatime shared:189 master:27 - cgroup
	2021-08-17T00:11:36.118857500Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/hugetlb rw,nosuid,nodev,noexec,relatime shared:190 master:28 - cgroup
	2021-08-17T00:11:36.118869000Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/pids rw,nosuid,nodev,noexec,relatime shared:191 master:29 - cgroup
	2021-08-17T00:11:36.118874900Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime shared:193 master:31 - cgroup cgroup'
	2021-08-17T00:11:36.118995600Z  + [[ -n /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime shared:173 master:18 - cgroup
	2021-08-17T00:11:36.119008500Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/cpu rw,nosuid,nodev,noexec,relatime shared:174 master:19 - cgroup
	2021-08-17T00:11:36.119014200Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/cpuacct rw,nosuid,nodev,noexec,relatime shared:175 master:20 - cgroup
	2021-08-17T00:11:36.119019300Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime shared:176 master:21 - cgroup
	2021-08-17T00:11:36.119024200Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime shared:177 master:22 - cgroup
	2021-08-17T00:11:36.119029300Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime shared:178 master:23 - cgroup
	2021-08-17T00:11:36.119034400Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime shared:186 master:24 - cgroup
	2021-08-17T00:11:36.119039500Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/net_cls rw,nosuid,nodev,noexec,relatime shared:187 master:25 - cgroup
	2021-08-17T00:11:36.119044900Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime shared:188 master:26 - cgroup
	2021-08-17T00:11:36.119049700Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/net_prio rw,nosuid,nodev,noexec,relatime shared:189 master:27 - cgroup
	2021-08-17T00:11:36.119054500Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/hugetlb rw,nosuid,nodev,noexec,relatime shared:190 master:28 - cgroup
	2021-08-17T00:11:36.119059700Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/pids rw,nosuid,nodev,noexec,relatime shared:191 master:29 - cgroup
	2021-08-17T00:11:36.119064600Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime shared:193 master:31 - cgroup cgroup ]]
	2021-08-17T00:11:36.119069700Z  + local mount_root
	2021-08-17T00:11:36.119587000Z  ++ head -n 1
	2021-08-17T00:11:36.125014500Z  ++ cut '-d ' -f1
	2021-08-17T00:11:36.143436900Z  + mount_root=/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.149817100Z  ++ cut '-d ' -f 2
	2021-08-17T00:11:36.149837800Z  ++ echo '/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime shared:173 master:18 - cgroup
	2021-08-17T00:11:36.149844700Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/cpu rw,nosuid,nodev,noexec,relatime shared:174 master:19 - cgroup
	2021-08-17T00:11:36.149849800Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/cpuacct rw,nosuid,nodev,noexec,relatime shared:175 master:20 - cgroup
	2021-08-17T00:11:36.149855000Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime shared:176 master:21 - cgroup
	2021-08-17T00:11:36.149860300Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime shared:177 master:22 - cgroup
	2021-08-17T00:11:36.149865300Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime shared:178 master:23 - cgroup
	2021-08-17T00:11:36.149870100Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime shared:186 master:24 - cgroup
	2021-08-17T00:11:36.149875100Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/net_cls rw,nosuid,nodev,noexec,relatime shared:187 master:25 - cgroup
	2021-08-17T00:11:36.149879900Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime shared:188 master:26 - cgroup
	2021-08-17T00:11:36.149884300Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/net_prio rw,nosuid,nodev,noexec,relatime shared:189 master:27 - cgroup
	2021-08-17T00:11:36.149888700Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/hugetlb rw,nosuid,nodev,noexec,relatime shared:190 master:28 - cgroup
	2021-08-17T00:11:36.149893200Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/pids rw,nosuid,nodev,noexec,relatime shared:191 master:29 - cgroup
	2021-08-17T00:11:36.149898300Z  /docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime shared:193 master:31 - cgroup cgroup'
	2021-08-17T00:11:36.156907800Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2021-08-17T00:11:36.156931500Z  + local target=/sys/fs/cgroup/cpuset/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.156938000Z  + findmnt /sys/fs/cgroup/cpuset/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.178971500Z  + mkdir -p /sys/fs/cgroup/cpuset/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.189082000Z  + mount --bind /sys/fs/cgroup/cpuset /sys/fs/cgroup/cpuset/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.201372500Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2021-08-17T00:11:36.201404200Z  + local target=/sys/fs/cgroup/cpu/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.201421500Z  + findmnt /sys/fs/cgroup/cpu/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.210825800Z  + mkdir -p /sys/fs/cgroup/cpu/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.218168800Z  + mount --bind /sys/fs/cgroup/cpu /sys/fs/cgroup/cpu/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.225229500Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2021-08-17T00:11:36.225790900Z  + local target=/sys/fs/cgroup/cpuacct/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.225804700Z  + findmnt /sys/fs/cgroup/cpuacct/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.234489700Z  + mkdir -p /sys/fs/cgroup/cpuacct/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.234514200Z  + mount --bind /sys/fs/cgroup/cpuacct /sys/fs/cgroup/cpuacct/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.243090000Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2021-08-17T00:11:36.243103000Z  + local target=/sys/fs/cgroup/blkio/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.243108700Z  + findmnt /sys/fs/cgroup/blkio/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.246084300Z  + mkdir -p /sys/fs/cgroup/blkio/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.257379900Z  + mount --bind /sys/fs/cgroup/blkio /sys/fs/cgroup/blkio/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.272139900Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2021-08-17T00:11:36.272167000Z  + local target=/sys/fs/cgroup/memory/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.272173300Z  + findmnt /sys/fs/cgroup/memory/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.285358000Z  + mkdir -p /sys/fs/cgroup/memory/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.292104600Z  + mount --bind /sys/fs/cgroup/memory /sys/fs/cgroup/memory/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.297044800Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2021-08-17T00:11:36.297073600Z  + local target=/sys/fs/cgroup/devices/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.297080100Z  + findmnt /sys/fs/cgroup/devices/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.313235900Z  + mkdir -p /sys/fs/cgroup/devices/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.314905200Z  + mount --bind /sys/fs/cgroup/devices /sys/fs/cgroup/devices/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.323774100Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2021-08-17T00:11:36.323793500Z  + local target=/sys/fs/cgroup/freezer/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.323799600Z  + findmnt /sys/fs/cgroup/freezer/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.333168500Z  + mkdir -p /sys/fs/cgroup/freezer/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.337965600Z  + mount --bind /sys/fs/cgroup/freezer /sys/fs/cgroup/freezer/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.346097700Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2021-08-17T00:11:36.346120400Z  + local target=/sys/fs/cgroup/net_cls/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.346127000Z  + findmnt /sys/fs/cgroup/net_cls/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.357361100Z  + mkdir -p /sys/fs/cgroup/net_cls/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.361175800Z  + mount --bind /sys/fs/cgroup/net_cls /sys/fs/cgroup/net_cls/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.373981600Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2021-08-17T00:11:36.374008200Z  + local target=/sys/fs/cgroup/perf_event/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.374014300Z  + findmnt /sys/fs/cgroup/perf_event/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.379072400Z  + mkdir -p /sys/fs/cgroup/perf_event/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.388236200Z  + mount --bind /sys/fs/cgroup/perf_event /sys/fs/cgroup/perf_event/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.394694400Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2021-08-17T00:11:36.394718500Z  + local target=/sys/fs/cgroup/net_prio/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.394724900Z  + findmnt /sys/fs/cgroup/net_prio/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.399036600Z  + mkdir -p /sys/fs/cgroup/net_prio/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.404697900Z  + mount --bind /sys/fs/cgroup/net_prio /sys/fs/cgroup/net_prio/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.407863900Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2021-08-17T00:11:36.408816800Z  + local target=/sys/fs/cgroup/hugetlb/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.409635600Z  + findmnt /sys/fs/cgroup/hugetlb/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.415031700Z  + mkdir -p /sys/fs/cgroup/hugetlb/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.422071600Z  + mount --bind /sys/fs/cgroup/hugetlb /sys/fs/cgroup/hugetlb/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.427948100Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2021-08-17T00:11:36.428591900Z  + local target=/sys/fs/cgroup/pids/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.428802900Z  + findmnt /sys/fs/cgroup/pids/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.437546700Z  + mkdir -p /sys/fs/cgroup/pids/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.445784600Z  + mount --bind /sys/fs/cgroup/pids /sys/fs/cgroup/pids/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.456576600Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2021-08-17T00:11:36.456600500Z  + local target=/sys/fs/cgroup/systemd/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.456709100Z  + findmnt /sys/fs/cgroup/systemd/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.464618100Z  + mkdir -p /sys/fs/cgroup/systemd/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.477032900Z  + mount --bind /sys/fs/cgroup/systemd /sys/fs/cgroup/systemd/docker/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da
	2021-08-17T00:11:36.484648600Z  + mount --make-rprivate /sys/fs/cgroup
	2021-08-17T00:11:36.491582100Z  + echo '/sys/fs/cgroup/cpuset
	2021-08-17T00:11:36.491603300Z  /sys/fs/cgroup/cpu
	2021-08-17T00:11:36.491608900Z  /sys/fs/cgroup/cpuacct
	2021-08-17T00:11:36.491614300Z  /sys/fs/cgroup/blkio
	2021-08-17T00:11:36.491618600Z  /sys/fs/cgroup/memory
	2021-08-17T00:11:36.491623200Z  /sys/fs/cgroup/devices
	2021-08-17T00:11:36.491627800Z  /sys/fs/cgroup/freezer
	2021-08-17T00:11:36.491739600Z  /sys/fs/cgroup/net_cls
	2021-08-17T00:11:36.491748200Z  /sys/fs/cgroup/perf_event
	2021-08-17T00:11:36.491753000Z  /sys/fs/cgroup/net_prio
	2021-08-17T00:11:36.491757500Z  /sys/fs/cgroup/hugetlb
	2021-08-17T00:11:36.491762000Z  /sys/fs/cgroup/pids
	2021-08-17T00:11:36.491766400Z  /sys/fs/cgroup/systemd'
	2021-08-17T00:11:36.491771100Z  + IFS=
	2021-08-17T00:11:36.491775300Z  + read -r subsystem
	2021-08-17T00:11:36.493534900Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/cpuset
	2021-08-17T00:11:36.493555600Z  + local cgroup_root=/kubelet
	2021-08-17T00:11:36.493561600Z  + local subsystem=/sys/fs/cgroup/cpuset
	2021-08-17T00:11:36.493566300Z  + '[' -z /kubelet ']'
	2021-08-17T00:11:36.493571200Z  + mkdir -p /sys/fs/cgroup/cpuset//kubelet
	2021-08-17T00:11:36.501090600Z  + '[' /sys/fs/cgroup/cpuset == /sys/fs/cgroup/cpuset ']'
	2021-08-17T00:11:36.501653100Z  + cat /sys/fs/cgroup/cpuset/cpuset.cpus
	2021-08-17T00:11:36.507497000Z  + cat /sys/fs/cgroup/cpuset/cpuset.mems
	2021-08-17T00:11:36.512691000Z  + mount --bind /sys/fs/cgroup/cpuset//kubelet /sys/fs/cgroup/cpuset//kubelet
	2021-08-17T00:11:36.521355300Z  + IFS=
	2021-08-17T00:11:36.521375600Z  + read -r subsystem
	2021-08-17T00:11:36.521381200Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/cpu
	2021-08-17T00:11:36.521386300Z  + local cgroup_root=/kubelet
	2021-08-17T00:11:36.521391100Z  + local subsystem=/sys/fs/cgroup/cpu
	2021-08-17T00:11:36.521395600Z  + '[' -z /kubelet ']'
	2021-08-17T00:11:36.521400100Z  + mkdir -p /sys/fs/cgroup/cpu//kubelet
	2021-08-17T00:11:36.541636000Z  + '[' /sys/fs/cgroup/cpu == /sys/fs/cgroup/cpuset ']'
	2021-08-17T00:11:36.541663000Z  + mount --bind /sys/fs/cgroup/cpu//kubelet /sys/fs/cgroup/cpu//kubelet
	2021-08-17T00:11:36.547185100Z  + IFS=
	2021-08-17T00:11:36.547207400Z  + read -r subsystem
	2021-08-17T00:11:36.547213200Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/cpuacct
	2021-08-17T00:11:36.547217900Z  + local cgroup_root=/kubelet
	2021-08-17T00:11:36.547222700Z  + local subsystem=/sys/fs/cgroup/cpuacct
	2021-08-17T00:11:36.547227400Z  + '[' -z /kubelet ']'
	2021-08-17T00:11:36.547232000Z  + mkdir -p /sys/fs/cgroup/cpuacct//kubelet
	2021-08-17T00:11:36.558099600Z  + '[' /sys/fs/cgroup/cpuacct == /sys/fs/cgroup/cpuset ']'
	2021-08-17T00:11:36.558126700Z  + mount --bind /sys/fs/cgroup/cpuacct//kubelet /sys/fs/cgroup/cpuacct//kubelet
	2021-08-17T00:11:36.562708700Z  + IFS=
	2021-08-17T00:11:36.562729400Z  + read -r subsystem
	2021-08-17T00:11:36.562734800Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/blkio
	2021-08-17T00:11:36.562740200Z  + local cgroup_root=/kubelet
	2021-08-17T00:11:36.562744700Z  + local subsystem=/sys/fs/cgroup/blkio
	2021-08-17T00:11:36.562749800Z  + '[' -z /kubelet ']'
	2021-08-17T00:11:36.562754900Z  + mkdir -p /sys/fs/cgroup/blkio//kubelet
	2021-08-17T00:11:36.568378900Z  + '[' /sys/fs/cgroup/blkio == /sys/fs/cgroup/cpuset ']'
	2021-08-17T00:11:36.568400500Z  + mount --bind /sys/fs/cgroup/blkio//kubelet /sys/fs/cgroup/blkio//kubelet
	2021-08-17T00:11:36.578231400Z  + IFS=
	2021-08-17T00:11:36.578364100Z  + read -r subsystem
	2021-08-17T00:11:36.578372600Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/memory
	2021-08-17T00:11:36.578377700Z  + local cgroup_root=/kubelet
	2021-08-17T00:11:36.578382300Z  + local subsystem=/sys/fs/cgroup/memory
	2021-08-17T00:11:36.578386900Z  + '[' -z /kubelet ']'
	2021-08-17T00:11:36.578391400Z  + mkdir -p /sys/fs/cgroup/memory//kubelet
	2021-08-17T00:11:36.582830200Z  + '[' /sys/fs/cgroup/memory == /sys/fs/cgroup/cpuset ']'
	2021-08-17T00:11:36.582854100Z  + mount --bind /sys/fs/cgroup/memory//kubelet /sys/fs/cgroup/memory//kubelet
	2021-08-17T00:11:36.594874600Z  + IFS=
	2021-08-17T00:11:36.594896500Z  + read -r subsystem
	2021-08-17T00:11:36.594902400Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/devices
	2021-08-17T00:11:36.594907600Z  + local cgroup_root=/kubelet
	2021-08-17T00:11:36.594912100Z  + local subsystem=/sys/fs/cgroup/devices
	2021-08-17T00:11:36.594916600Z  + '[' -z /kubelet ']'
	2021-08-17T00:11:36.594921100Z  + mkdir -p /sys/fs/cgroup/devices//kubelet
	2021-08-17T00:11:36.602910300Z  + '[' /sys/fs/cgroup/devices == /sys/fs/cgroup/cpuset ']'
	2021-08-17T00:11:36.602934200Z  + mount --bind /sys/fs/cgroup/devices//kubelet /sys/fs/cgroup/devices//kubelet
	2021-08-17T00:11:36.618866100Z  + IFS=
	2021-08-17T00:11:36.621146200Z  + read -r subsystem
	2021-08-17T00:11:36.621690800Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/freezer
	2021-08-17T00:11:36.623465100Z  + local cgroup_root=/kubelet
	2021-08-17T00:11:36.623484400Z  + local subsystem=/sys/fs/cgroup/freezer
	2021-08-17T00:11:36.623490300Z  + '[' -z /kubelet ']'
	2021-08-17T00:11:36.623495100Z  + mkdir -p /sys/fs/cgroup/freezer//kubelet
	2021-08-17T00:11:36.627221500Z  + '[' /sys/fs/cgroup/freezer == /sys/fs/cgroup/cpuset ']'
	2021-08-17T00:11:36.627240900Z  + mount --bind /sys/fs/cgroup/freezer//kubelet /sys/fs/cgroup/freezer//kubelet
	2021-08-17T00:11:36.657428200Z  + IFS=
	2021-08-17T00:11:36.657457200Z  + read -r subsystem
	2021-08-17T00:11:36.657463500Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/net_cls
	2021-08-17T00:11:36.657468600Z  + local cgroup_root=/kubelet
	2021-08-17T00:11:36.657473200Z  + local subsystem=/sys/fs/cgroup/net_cls
	2021-08-17T00:11:36.657477900Z  + '[' -z /kubelet ']'
	2021-08-17T00:11:36.658021400Z  + mkdir -p /sys/fs/cgroup/net_cls//kubelet
	2021-08-17T00:11:36.661064000Z  + '[' /sys/fs/cgroup/net_cls == /sys/fs/cgroup/cpuset ']'
	2021-08-17T00:11:36.661082500Z  + mount --bind /sys/fs/cgroup/net_cls//kubelet /sys/fs/cgroup/net_cls//kubelet
	2021-08-17T00:11:36.668615200Z  + IFS=
	2021-08-17T00:11:36.668637300Z  + read -r subsystem
	2021-08-17T00:11:36.668643300Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/perf_event
	2021-08-17T00:11:36.668648600Z  + local cgroup_root=/kubelet
	2021-08-17T00:11:36.668662400Z  + local subsystem=/sys/fs/cgroup/perf_event
	2021-08-17T00:11:36.668667700Z  + '[' -z /kubelet ']'
	2021-08-17T00:11:36.668672100Z  + mkdir -p /sys/fs/cgroup/perf_event//kubelet
	2021-08-17T00:11:36.679080300Z  + '[' /sys/fs/cgroup/perf_event == /sys/fs/cgroup/cpuset ']'
	2021-08-17T00:11:36.683528800Z  + mount --bind /sys/fs/cgroup/perf_event//kubelet /sys/fs/cgroup/perf_event//kubelet
	2021-08-17T00:11:36.702910600Z  + IFS=
	2021-08-17T00:11:36.702940100Z  + read -r subsystem
	2021-08-17T00:11:36.702946000Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/net_prio
	2021-08-17T00:11:36.702951000Z  + local cgroup_root=/kubelet
	2021-08-17T00:11:36.702955600Z  + local subsystem=/sys/fs/cgroup/net_prio
	2021-08-17T00:11:36.702960400Z  + '[' -z /kubelet ']'
	2021-08-17T00:11:36.702964900Z  + mkdir -p /sys/fs/cgroup/net_prio//kubelet
	2021-08-17T00:11:36.709417300Z  + '[' /sys/fs/cgroup/net_prio == /sys/fs/cgroup/cpuset ']'
	2021-08-17T00:11:36.710764000Z  + mount --bind /sys/fs/cgroup/net_prio//kubelet /sys/fs/cgroup/net_prio//kubelet
	2021-08-17T00:11:36.722862100Z  + IFS=
	2021-08-17T00:11:36.723120200Z  + read -r subsystem
	2021-08-17T00:11:36.723130700Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/hugetlb
	2021-08-17T00:11:36.723135800Z  + local cgroup_root=/kubelet
	2021-08-17T00:11:36.723140300Z  + local subsystem=/sys/fs/cgroup/hugetlb
	2021-08-17T00:11:36.723145000Z  + '[' -z /kubelet ']'
	2021-08-17T00:11:36.723149500Z  + mkdir -p /sys/fs/cgroup/hugetlb//kubelet
	2021-08-17T00:11:36.733738700Z  + '[' /sys/fs/cgroup/hugetlb == /sys/fs/cgroup/cpuset ']'
	2021-08-17T00:11:36.737586200Z  + mount --bind /sys/fs/cgroup/hugetlb//kubelet /sys/fs/cgroup/hugetlb//kubelet
	2021-08-17T00:11:36.737607200Z  + IFS=
	2021-08-17T00:11:36.737613000Z  + read -r subsystem
	2021-08-17T00:11:36.737618500Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/pids
	2021-08-17T00:11:36.737623500Z  + local cgroup_root=/kubelet
	2021-08-17T00:11:36.737628300Z  + local subsystem=/sys/fs/cgroup/pids
	2021-08-17T00:11:36.737633200Z  + '[' -z /kubelet ']'
	2021-08-17T00:11:36.737637600Z  + mkdir -p /sys/fs/cgroup/pids//kubelet
	2021-08-17T00:11:36.737650300Z  + '[' /sys/fs/cgroup/pids == /sys/fs/cgroup/cpuset ']'
	2021-08-17T00:11:36.737656200Z  + mount --bind /sys/fs/cgroup/pids//kubelet /sys/fs/cgroup/pids//kubelet
	2021-08-17T00:11:36.745410900Z  + IFS=
	2021-08-17T00:11:36.745440600Z  + read -r subsystem
	2021-08-17T00:11:36.745448000Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/systemd
	2021-08-17T00:11:36.745613500Z  + local cgroup_root=/kubelet
	2021-08-17T00:11:36.745626700Z  + local subsystem=/sys/fs/cgroup/systemd
	2021-08-17T00:11:36.745632500Z  + '[' -z /kubelet ']'
	2021-08-17T00:11:36.745638400Z  + mkdir -p /sys/fs/cgroup/systemd//kubelet
	2021-08-17T00:11:36.750871300Z  + '[' /sys/fs/cgroup/systemd == /sys/fs/cgroup/cpuset ']'
	2021-08-17T00:11:36.750895500Z  + mount --bind /sys/fs/cgroup/systemd//kubelet /sys/fs/cgroup/systemd//kubelet
	2021-08-17T00:11:36.764528400Z  + IFS=
	2021-08-17T00:11:36.764559500Z  + read -r subsystem
	2021-08-17T00:11:36.766332200Z  + return
	2021-08-17T00:11:36.766354900Z  + fix_machine_id
	2021-08-17T00:11:36.766361200Z  + echo 'INFO: clearing and regenerating /etc/machine-id'
	2021-08-17T00:11:36.766366000Z  INFO: clearing and regenerating /etc/machine-id
	2021-08-17T00:11:36.766371000Z  + rm -f /etc/machine-id
	2021-08-17T00:11:36.768782600Z  + systemd-machine-id-setup
	2021-08-17T00:11:36.786567700Z  Initializing machine ID from D-Bus machine ID.
	2021-08-17T00:11:36.857614700Z  + fix_product_name
	2021-08-17T00:11:36.858089900Z  + [[ -f /sys/class/dmi/id/product_name ]]
	2021-08-17T00:11:36.859058000Z  + echo 'INFO: faking /sys/class/dmi/id/product_name to be "kind"'
	2021-08-17T00:11:36.859075700Z  INFO: faking /sys/class/dmi/id/product_name to be "kind"
	2021-08-17T00:11:36.859081500Z  + echo kind
	2021-08-17T00:11:36.860211300Z  + mount -o ro,bind /kind/product_name /sys/class/dmi/id/product_name
	2021-08-17T00:11:36.866057300Z  + fix_product_uuid
	2021-08-17T00:11:36.866079400Z  + [[ ! -f /kind/product_uuid ]]
	2021-08-17T00:11:36.866087300Z  + cat /proc/sys/kernel/random/uuid
	2021-08-17T00:11:36.876462100Z  + [[ -f /sys/class/dmi/id/product_uuid ]]
	2021-08-17T00:11:36.876969900Z  + echo 'INFO: faking /sys/class/dmi/id/product_uuid to be random'
	2021-08-17T00:11:36.877210200Z  INFO: faking /sys/class/dmi/id/product_uuid to be random
	2021-08-17T00:11:36.879453000Z  + mount -o ro,bind /kind/product_uuid /sys/class/dmi/id/product_uuid
	2021-08-17T00:11:36.886982100Z  + [[ -f /sys/devices/virtual/dmi/id/product_uuid ]]
	2021-08-17T00:11:36.888582400Z  + echo 'INFO: faking /sys/devices/virtual/dmi/id/product_uuid as well'
	2021-08-17T00:11:36.888821600Z  INFO: faking /sys/devices/virtual/dmi/id/product_uuid as well
	2021-08-17T00:11:36.888839000Z  + mount -o ro,bind /kind/product_uuid /sys/devices/virtual/dmi/id/product_uuid
	2021-08-17T00:11:36.894152700Z  + select_iptables
	2021-08-17T00:11:36.894575200Z  + local mode=nft
	2021-08-17T00:11:36.901100500Z  ++ grep '^-'
	2021-08-17T00:11:36.902450000Z  ++ wc -l
	2021-08-17T00:11:36.940975200Z  + num_legacy_lines=6
	2021-08-17T00:11:36.940998800Z  + '[' 6 -ge 10 ']'
	2021-08-17T00:11:36.953761800Z  ++ grep '^-'
	2021-08-17T00:11:36.954504000Z  ++ wc -l
	2021-08-17T00:11:36.990343000Z  ++ true
	2021-08-17T00:11:36.992024900Z  + num_nft_lines=0
	2021-08-17T00:11:36.992046100Z  + '[' 6 -ge 0 ']'
	2021-08-17T00:11:36.992051800Z  + mode=legacy
	2021-08-17T00:11:36.992056700Z  + echo 'INFO: setting iptables to detected mode: legacy'
	2021-08-17T00:11:36.992061600Z  INFO: setting iptables to detected mode: legacy
	2021-08-17T00:11:36.992077100Z  + update-alternatives --set iptables /usr/sbin/iptables-legacy
	2021-08-17T00:11:36.992082100Z  + echo 'retryable update-alternatives: --set iptables /usr/sbin/iptables-legacy'
	2021-08-17T00:11:36.992086700Z  + local 'args=--set iptables /usr/sbin/iptables-legacy'
	2021-08-17T00:11:36.995845500Z  ++ seq 0 15
	2021-08-17T00:11:37.017240700Z  + for i in $(seq 0 15)
	2021-08-17T00:11:37.017792600Z  + /usr/bin/update-alternatives --set iptables /usr/sbin/iptables-legacy
	2021-08-17T00:11:37.032853900Z  + return
	2021-08-17T00:11:37.033346400Z  + update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
	2021-08-17T00:11:37.036825500Z  + echo 'retryable update-alternatives: --set ip6tables /usr/sbin/ip6tables-legacy'
	2021-08-17T00:11:37.038608400Z  + local 'args=--set ip6tables /usr/sbin/ip6tables-legacy'
	2021-08-17T00:11:37.040181400Z  ++ seq 0 15
	2021-08-17T00:11:37.045200000Z  + for i in $(seq 0 15)
	2021-08-17T00:11:37.046757300Z  + /usr/bin/update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
	2021-08-17T00:11:37.064203500Z  + return
	2021-08-17T00:11:37.065062600Z  + enable_network_magic
	2021-08-17T00:11:37.065398900Z  + local docker_embedded_dns_ip=127.0.0.11
	2021-08-17T00:11:37.065618100Z  + local docker_host_ip
	2021-08-17T00:11:37.067453400Z  ++ cut '-d ' -f1
	2021-08-17T00:11:37.068831300Z  ++ head -n1 /dev/fd/63
	2021-08-17T00:11:37.072469700Z  +++ getent ahostsv4 host.docker.internal
	2021-08-17T00:11:37.110861200Z  + docker_host_ip=192.168.65.2
	2021-08-17T00:11:37.110891600Z  + [[ -z 192.168.65.2 ]]
	2021-08-17T00:11:37.110898100Z  + [[ 192.168.65.2 =~ ^127\.[0-9]+\.[0-9]+\.[0-9]+$ ]]
	2021-08-17T00:11:37.110903500Z  + iptables-restore
	2021-08-17T00:11:37.112176300Z  + iptables-save
	2021-08-17T00:11:37.123077100Z  + sed -e 's/-d 127.0.0.11/-d 192.168.65.2/g' -e 's/-A OUTPUT \(.*\) -j DOCKER_OUTPUT/\0\n-A PREROUTING \1 -j DOCKER_OUTPUT/' -e 's/--to-source :53/--to-source 192.168.65.2:53/g'
	2021-08-17T00:11:37.183064000Z  + cp /etc/resolv.conf /etc/resolv.conf.original
	2021-08-17T00:11:37.195216300Z  + sed -e s/127.0.0.11/192.168.65.2/g /etc/resolv.conf.original
	2021-08-17T00:11:37.215586600Z  ++ head -n1 /dev/fd/63
	2021-08-17T00:11:37.218734600Z  ++ cut '-d ' -f1
	2021-08-17T00:11:37.220367500Z  ++++ hostname
	2021-08-17T00:11:37.237765700Z  +++ getent ahostsv4 kubernetes-upgrade-20210817001119-111344
	2021-08-17T00:11:37.263163900Z  + curr_ipv4=192.168.67.2
	2021-08-17T00:11:37.270301000Z  + echo 'INFO: Detected IPv4 address: 192.168.67.2'
	2021-08-17T00:11:37.270323500Z  INFO: Detected IPv4 address: 192.168.67.2
	2021-08-17T00:11:37.270329300Z  + '[' -f /kind/old-ipv4 ']'
	2021-08-17T00:11:37.270366800Z  + [[ -n 192.168.67.2 ]]
	2021-08-17T00:11:37.270375600Z  + echo -n 192.168.67.2
	2021-08-17T00:11:37.277901800Z  ++ cut '-d ' -f1
	2021-08-17T00:11:37.278686700Z  ++ head -n1 /dev/fd/63
	2021-08-17T00:11:37.286903000Z  ++++ hostname
	2021-08-17T00:11:37.302063700Z  +++ getent ahostsv6 kubernetes-upgrade-20210817001119-111344
	2021-08-17T00:11:37.311130900Z  + curr_ipv6=
	2021-08-17T00:11:37.311154900Z  + echo 'INFO: Detected IPv6 address: '
	2021-08-17T00:11:37.311160900Z  INFO: Detected IPv6 address: 
	2021-08-17T00:11:37.312910700Z  + '[' -f /kind/old-ipv6 ']'
	2021-08-17T00:11:37.313342000Z  + [[ -n '' ]]
	2021-08-17T00:11:37.314897400Z  ++ uname -a
	2021-08-17T00:11:37.323158200Z  + echo 'entrypoint completed: Linux kubernetes-upgrade-20210817001119-111344 4.19.121-linuxkit #1 SMP Tue Dec 1 17:50:32 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux'
	2021-08-17T00:11:37.323319300Z  entrypoint completed: Linux kubernetes-upgrade-20210817001119-111344 4.19.121-linuxkit #1 SMP Tue Dec 1 17:50:32 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
	2021-08-17T00:11:37.324639400Z  + exec /sbin/init
	2021-08-17T00:11:37.343374600Z  Failed to find module 'autofs4'
	2021-08-17T00:11:37.346156400Z  systemd 245.4-4ubuntu3.11 running in system mode. (+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid)
	2021-08-17T00:11:37.346179400Z  Detected virtualization docker.
	2021-08-17T00:11:37.346185200Z  Detected architecture x86-64.
	2021-08-17T00:11:37.347705700Z  Failed to create symlink /sys/fs/cgroup/net_prio: File exists
	2021-08-17T00:11:37.349642600Z  Failed to create symlink /sys/fs/cgroup/net_cls: File exists
	2021-08-17T00:11:37.354123300Z  Failed to create symlink /sys/fs/cgroup/cpuacct: File exists
	2021-08-17T00:11:37.354142500Z  Failed to create symlink /sys/fs/cgroup/cpu: File exists
	2021-08-17T00:11:37.356830300Z  
	2021-08-17T00:11:37.356963600Z  Welcome to Ubuntu 20.04.2 LTS!
	2021-08-17T00:11:37.356972300Z  
	2021-08-17T00:11:37.356977100Z  Set hostname to <kubernetes-upgrade-20210817001119-111344>.
	2021-08-17T00:11:37.713834300Z  [  OK  ] Started Dispatch Password …ts to Console Directory Watch.
	2021-08-17T00:11:37.713865900Z  [UNSUPP] Starting of Arbitrary Exec…Automount Point not supported.
	2021-08-17T00:11:37.713872400Z  [  OK  ] Reached target Local Encrypted Volumes.
	2021-08-17T00:11:37.713877600Z  [  OK  ] Reached target Network is Online.
	2021-08-17T00:11:37.713882300Z  [  OK  ] Reached target Paths.
	2021-08-17T00:11:37.713898200Z  [  OK  ] Reached target Slices.
	2021-08-17T00:11:37.713903700Z  [  OK  ] Reached target Swap.
	2021-08-17T00:11:37.717135500Z  [  OK  ] Listening on Journal Audit Socket.
	2021-08-17T00:11:37.717378800Z  [  OK  ] Listening on Journal Socket (/dev/log).
	2021-08-17T00:11:37.717399600Z  [  OK  ] Listening on Journal Socket.
	2021-08-17T00:11:37.728009100Z           Mounting Huge Pages File System...
	2021-08-17T00:11:37.750914900Z           Mounting Kernel Debug File System...
	2021-08-17T00:11:37.774184800Z           Mounting Kernel Trace File System...
	2021-08-17T00:11:37.806929000Z           Starting Journal Service...
	2021-08-17T00:11:37.859788400Z           Starting Create list of st…odes for the current kernel...
	2021-08-17T00:11:37.888594500Z           Mounting FUSE Control File System...
	2021-08-17T00:11:37.949831400Z           Starting Remount Root and Kernel File Systems...
	2021-08-17T00:11:37.989880900Z           Starting Apply Kernel Variables...
	2021-08-17T00:11:37.999456500Z  [  OK  ] Mounted Huge Pages File System.
	2021-08-17T00:11:37.999483700Z  [  OK  ] Mounted Kernel Debug File System.
	2021-08-17T00:11:38.000058400Z  [  OK  ] Mounted Kernel Trace File System.
	2021-08-17T00:11:38.000068000Z  [  OK  ] Finished Create list of st… nodes for the current kernel.
	2021-08-17T00:11:38.000073300Z  [  OK  ] Mounted FUSE Control File System.
	2021-08-17T00:11:38.049523800Z  [  OK  ] Finished Remount Root and Kernel File Systems.
	2021-08-17T00:11:38.071886600Z           Starting Create System Users...
	2021-08-17T00:11:38.081876000Z           Starting Update UTMP about System Boot/Shutdown...
	2021-08-17T00:11:38.154390500Z  [  OK  ] Finished Apply Kernel Variables.
	2021-08-17T00:11:38.185367800Z  [  OK  ] Finished Update UTMP about System Boot/Shutdown.
	2021-08-17T00:11:38.188616700Z  [  OK  ] Started Journal Service.
	2021-08-17T00:11:38.199821300Z           Starting Flush Journal to Persistent Storage...
	2021-08-17T00:11:38.236726100Z  [  OK  ] Finished Flush Journal to Persistent Storage.
	2021-08-17T00:11:38.302172100Z  [  OK  ] Finished Create System Users.
	2021-08-17T00:11:38.322479600Z           Starting Create Static Device Nodes in /dev...
	2021-08-17T00:11:38.364212800Z  [  OK  ] Finished Create Static Device Nodes in /dev.
	2021-08-17T00:11:38.368553200Z  [  OK  ] Reached target Local File Systems (Pre).
	2021-08-17T00:11:38.368578100Z  [  OK  ] Reached target Local File Systems.
	2021-08-17T00:11:38.368584900Z  [  OK  ] Reached target System Initialization.
	2021-08-17T00:11:38.368590500Z  [  OK  ] Started Daily Cleanup of Temporary Directories.
	2021-08-17T00:11:38.368596300Z  [  OK  ] Reached target Timers.
	2021-08-17T00:11:38.368601600Z  [  OK  ] Listening on D-Bus System Message Bus Socket.
	2021-08-17T00:11:38.384399100Z           Starting Docker Socket for the API.
	2021-08-17T00:11:38.386150600Z           Starting Podman API Socket.
	2021-08-17T00:11:38.401086700Z  [  OK  ] Listening on Podman API Socket.
	2021-08-17T00:11:38.404581900Z  [  OK  ] Listening on Docker Socket for the API.
	2021-08-17T00:11:38.404604500Z  [  OK  ] Reached target Sockets.
	2021-08-17T00:11:38.404610900Z  [  OK  ] Reached target Basic System.
	2021-08-17T00:11:38.406960700Z           Starting containerd container runtime...
	2021-08-17T00:11:38.419473300Z  [  OK  ] Started D-Bus System Message Bus.
	2021-08-17T00:11:38.440784900Z           Starting minikube automount...
	2021-08-17T00:11:38.454375500Z           Starting OpenBSD Secure Shell server...
	2021-08-17T00:11:38.612632600Z  [  OK  ] Started OpenBSD Secure Shell server.
	2021-08-17T00:11:38.711655100Z  [  OK  ] Finished minikube automount.
	2021-08-17T00:11:38.999486100Z  [  OK  ] Started containerd container runtime.
	2021-08-17T00:11:38.999592200Z           Starting Docker Application Container Engine...
	2021-08-17T00:11:40.529098100Z  [  OK  ] Started Docker Application Container Engine.
	2021-08-17T00:11:40.532724200Z  [  OK  ] Reached target Multi-User System.
	2021-08-17T00:11:40.533061900Z  [  OK  ] Reached target Graphical Interface.
	2021-08-17T00:11:40.543638200Z           Starting Update UTMP about System Runlevel Changes...
	2021-08-17T00:11:40.580073200Z  [  OK  ] Finished Update UTMP about System Runlevel Changes.
	2021-08-17T00:14:52.425468600Z  [  OK  ] Stopped target Graphical Interface.
	2021-08-17T00:14:52.432382900Z  [  OK  ] Stopped target Multi-User System.
	2021-08-17T00:14:52.456562000Z  [  OK  ] Stopped target Timers.
	2021-08-17T00:14:52.458287200Z  [  OK  ] Stopped Daily Cleanup of Temporary Directories.
	2021-08-17T00:14:52.468150000Z           Stopping D-Bus System Message Bus...
	2021-08-17T00:14:52.469347700Z           Stopping Docker Application Container Engine...
	2021-08-17T00:14:52.472843700Z           Stopping kubelet: The Kubernetes Node Agent...
	2021-08-17T00:14:52.476889500Z           Stopping OpenBSD Secure Shell server...
	2021-08-17T00:14:52.566976300Z  [  OK  ] Stopped D-Bus System Message Bus.
	2021-08-17T00:14:52.640275500Z  [  OK  ] Stopped OpenBSD Secure Shell server.
	2021-08-17T00:14:53.071671600Z  [  OK  ] Stopped kubelet: The Kubernetes Node Agent.
	2021-08-17T00:14:54.498123800Z  [  OK  ] Unmounted /var/lib/docker/…69462313694250fe592391/merged.
	2021-08-17T00:14:54.912411600Z  [  OK  ] Unmounted /var/lib/docker/…1f94095558e9f6bd89/mounts/shm.
	2021-08-17T00:14:54.925199900Z  [  OK  ] Unmounted /var/lib/docker/…1c8f7f35f07c7a6a8d0ce2/merged.
	2021-08-17T00:14:54.932485000Z  [  OK  ] Unmounted /var/lib/docker/…47073a1711ef4c4e04e0b5/merged.
	2021-08-17T00:14:55.209232000Z  [  OK  ] Unmounted /var/lib/docker/…1bfd12ed1476983fc8/mounts/shm.
	2021-08-17T00:14:55.214209600Z  [  OK  ] Unmounted /var/lib/docker/…7e5e9d16f256d9c8cc0b33/merged.
	2021-08-17T00:14:55.274592100Z  [  OK  ] Unmounted /var/lib/docker/…f83fb2d6dd91a50660/mounts/shm.
	2021-08-17T00:14:55.284562700Z  [  OK  ] Unmounted /var/lib/docker/…91cabb007203dee80b6b12/merged.
	2021-08-17T00:14:55.352113800Z  [  OK  ] Unmounted /var/lib/docker/…e14bdbf4dd511904da/mounts/shm.
	2021-08-17T00:14:55.402188500Z  [  OK  ] Unmounted /var/lib/docker/…c8b9de2d1971e12bb86986/merged.
	2021-08-17T00:14:55.403292600Z  [  OK  ] Unmounted /var/lib/docker/…d1bdb89458dcaf057e32c9/merged.
	2021-08-17T00:14:57.622429600Z  [***   ] A stop job is running for Docker Ap…n Container Engine (5s / 1min 30s)
	2021-08-17T00:14:58.058978800Z  [ ***  ] A stop job is running for Docker Ap…Container Engine (41us / 1min 24s)
	2021-08-17T00:14:58.615189700Z  [  *** ] A stop job is running for Docker Ap…ontainer Engine (557ms / 1min 24s)
	2021-08-17T00:14:59.116322200Z  [   ***] A stop job is running for Docker Ap…n Container Engine (1s / 1min 24s)
	2021-08-17T00:14:59.618891500Z  [    **] A stop job is running for Docker Ap…n Container Engine (1s / 1min 24s)
	2021-08-17T00:15:00.113894400Z  [     *] A stop job is running for Docker Ap…n Container Engine (2s / 1min 24s)
	2021-08-17T00:15:00.614658600Z  [    **] A stop job is running for Docker Ap…n Container Engine (2s / 1min 24s)
	2021-08-17T00:15:01.114774800Z  [   ***] A stop job is running for Docker Ap…n Container Engine (3s / 1min 24s)
	2021-08-17T00:15:01.613082700Z  [  *** ] A stop job is running for Docker Ap…n Container Engine (3s / 1min 24s)
	2021-08-17T00:15:02.113879500Z  [ ***  ] A stop job is running for Docker Ap…n Container Engine (4s / 1min 24s)
	2021-08-17T00:15:02.613953700Z  [***   ] A stop job is running for Docker Ap…n Container Engine (4s / 1min 24s)
	2021-08-17T00:15:03.116308700Z  [**    ] A stop job is running for Docker Ap…n Container Engine (5s / 1min 24s)
	2021-08-17T00:15:03.614219200Z  [*     ] A stop job is running for Docker Ap…n Container Engine (5s / 1min 24s)
	2021-08-17T00:15:04.112928000Z  [**    ] A stop job is running for Docker Ap…n Container Engine (6s / 1min 24s)
	2021-08-17T00:15:04.428470200Z  [  OK  ] Unmounted /var/lib/docker/…6c9d50df4f7ca1e03a3c3e/merged.
	2021-08-17T00:15:04.594418200Z  [  OK  ] Stopped Docker Application Container Engine.
	2021-08-17T00:15:04.595705200Z  [  OK  ] Stopped target Network is Online.
	2021-08-17T00:15:04.596294100Z           Stopping containerd container runtime...
	2021-08-17T00:15:04.612847000Z  [  OK  ] Stopped minikube automount.
	2021-08-17T00:15:04.724610100Z  [  OK  ] Stopped containerd container runtime.
	2021-08-17T00:15:04.724990400Z  [  OK  ] Stopped target Basic System.
	2021-08-17T00:15:04.725528300Z  [  OK  ] Stopped target Paths.
	2021-08-17T00:15:04.725757700Z  [  OK  ] Stopped target Slices.
	2021-08-17T00:15:04.725771200Z  [  OK  ] Stopped target Sockets.
	2021-08-17T00:15:04.727308000Z  [  OK  ] Closed D-Bus System Message Bus Socket.
	2021-08-17T00:15:04.728278400Z  [  OK  ] Closed Docker Socket for the API.
	2021-08-17T00:15:04.729466300Z  [  OK  ] Closed Podman API Socket.
	2021-08-17T00:15:04.729484700Z  [  OK  ] Stopped target System Initialization.
	2021-08-17T00:15:04.729491300Z  [  OK  ] Stopped target Local Encrypted Volumes.
	2021-08-17T00:15:04.751858900Z  [  OK  ] Stopped Dispatch Password …ts to Console Directory Watch.
	2021-08-17T00:15:04.752502000Z  [  OK  ] Stopped target Local File Systems.
	2021-08-17T00:15:04.754603800Z           Unmounting /data...
	2021-08-17T00:15:04.762752800Z           Unmounting /etc/hostname...
	2021-08-17T00:15:04.765363000Z           Unmounting /etc/hosts...
	2021-08-17T00:15:04.782567900Z           Unmounting /etc/resolv.conf...
	2021-08-17T00:15:04.786597000Z           Unmounting /kind/product_uuid...
	2021-08-17T00:15:04.809592600Z           Unmounting /run/docker/netns/default...
	2021-08-17T00:15:04.811643500Z           Unmounting /tmp/hostpath-provisioner...
	2021-08-17T00:15:04.819730400Z           Unmounting /tmp/hostpath_pv...
	2021-08-17T00:15:04.831370000Z           Unmounting /usr/lib/modules...
	2021-08-17T00:15:04.835681500Z  [  OK  ] Stopped Apply Kernel Variables.
	2021-08-17T00:15:04.838422200Z           Stopping Update UTMP about System Boot/Shutdown...
	2021-08-17T00:15:04.873429100Z  [  OK  ] Unmounted /data.
	2021-08-17T00:15:04.883491300Z  [  OK  ] Unmounted /etc/hosts.
	2021-08-17T00:15:04.889932400Z  [  OK  ] Unmounted /etc/resolv.conf.
	2021-08-17T00:15:04.900479100Z  [  OK  ] Unmounted /tmp/hostpath_pv.
	2021-08-17T00:15:04.912907100Z  [  OK  ] Stopped Update UTMP about System Boot/Shutdown.
	2021-08-17T00:15:04.917671700Z           Unmounting /var...
	2021-08-17T00:15:04.982240100Z  [  OK  ] Unmounted /etc/hostname.
	2021-08-17T00:15:04.984673000Z  [  OK  ] Unmounted /kind/product_uuid.
	2021-08-17T00:15:04.987450500Z  [  OK  ] Unmounted /run/docker/netns/default.
	2021-08-17T00:15:04.992757200Z  [  OK  ] Unmounted /tmp/hostpath-provisioner.
	2021-08-17T00:15:04.997335400Z  [  OK  ] Unmounted /usr/lib/modules.
	2021-08-17T00:15:05.000172400Z  [  OK  ] Unmounted /var.
	2021-08-17T00:15:05.000873700Z           Unmounting /tmp...
	2021-08-17T00:15:05.093323100Z  [  OK  ] Unmounted /tmp.
	2021-08-17T00:15:05.094269600Z  [  OK  ] Stopped target Local File Systems (Pre).
	2021-08-17T00:15:05.094292700Z  [  OK  ] Stopped target Swap.
	2021-08-17T00:15:05.094298800Z  [  OK  ] Reached target Unmount All Filesystems.
	2021-08-17T00:15:05.098706600Z  [  OK  ] Stopped Create Static Device Nodes in /dev.
	2021-08-17T00:15:05.100442100Z  [  OK  ] Stopped Create System Users.
	2021-08-17T00:15:05.101848700Z  [  OK  ] Stopped Remount Root and Kernel File Systems.
	2021-08-17T00:15:05.101870300Z  [  OK  ] Reached target Shutdown.
	2021-08-17T00:15:05.101878100Z  [  OK  ] Reached target Final Step.
	2021-08-17T00:15:05.128799300Z           Starting Halt...
	2021-08-17T00:15:05.140314800Z  [  OK  ] Finished Power-Off.
	2021-08-17T00:15:05.141855300Z  [  OK  ] Reached target Power-Off.
	
	-- /stdout --
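The entrypoint trace in the container log above shows how the kicbase image picks an iptables backend before handing off to /sbin/init: it counts rule lines (those starting with "-") in the legacy and nft table dumps and prefers legacy when the legacy count wins, which is what happened here (num_legacy_lines=6, num_nft_lines=0, so mode=legacy). A condensed sketch of that heuristic, assuming iptables-legacy-save and iptables-nft-save are on PATH as they are in the image (the real script short-circuits before counting nft rules once the legacy count reaches 10):

	#!/usr/bin/env bash
	# Count live rules under each backend; "|| true" mirrors the "++ true"
	# in the trace, where an empty nft ruleset makes the dump pipeline fail.
	num_legacy_lines=$( (iptables-legacy-save || true) 2>/dev/null | grep '^-' | wc -l)
	num_nft_lines=$( (iptables-nft-save || true) 2>/dev/null | grep '^-' | wc -l)
	mode=nft
	if [ "$num_legacy_lines" -ge 10 ] || [ "$num_legacy_lines" -ge "$num_nft_lines" ]; then
	  mode=legacy
	fi
	echo "INFO: setting iptables to detected mode: ${mode}"
	update-alternatives --set iptables "/usr/sbin/iptables-${mode}"
	update-alternatives --set ip6tables "/usr/sbin/ip6tables-${mode}"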
	I0817 00:15:34.630096  106960 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 00:15:35.460361  106960 info.go:263] docker info: {ID:4XCY:3GZD:KK67:IPM7:RRQF:WWZF:OGQ6:X6HQ:572M:7N57:P63G:EAE5 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:70 OomKillDisable:true NGoroutines:64 SystemTime:2021-08-17 00:15:35.0614304 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0817 00:15:35.460630  106960 errors.go:98] postmortem docker info: {ID:4XCY:3GZD:KK67:IPM7:RRQF:WWZF:OGQ6:X6HQ:572M:7N57:P63G:EAE5 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:70 OomKillDisable:true NGoroutines:64 SystemTime:2021-08-17 00:15:35.0614304 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0817 00:15:35.467110  106960 network_create.go:255] running [docker network inspect kubernetes-upgrade-20210817001119-111344] to gather additional debugging logs...
	I0817 00:15:35.467446  106960 cli_runner.go:115] Run: docker network inspect kubernetes-upgrade-20210817001119-111344
	W0817 00:15:35.964538  106960 cli_runner.go:162] docker network inspect kubernetes-upgrade-20210817001119-111344 returned with exit code 1
	I0817 00:15:35.964538  106960 network_create.go:258] error running [docker network inspect kubernetes-upgrade-20210817001119-111344]: docker network inspect kubernetes-upgrade-20210817001119-111344: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubernetes-upgrade-20210817001119-111344
	I0817 00:15:35.964538  106960 network_create.go:260] output of [docker network inspect kubernetes-upgrade-20210817001119-111344]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubernetes-upgrade-20210817001119-111344
	
	** /stderr **
	I0817 00:15:35.970760  106960 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 00:15:36.783738  106960 info.go:263] docker info: {ID:4XCY:3GZD:KK67:IPM7:RRQF:WWZF:OGQ6:X6HQ:572M:7N57:P63G:EAE5 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:70 OomKillDisable:true NGoroutines:64 SystemTime:2021-08-17 00:15:36.4498245 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0817 00:15:36.794932  106960 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20210817001119-111344
	I0817 00:15:37.323701  106960 profile.go:148] Saving config to C:\Users\jenkins\minikube-integration\.minikube\profiles\kubernetes-upgrade-20210817001119-111344\config.json ...
	I0817 00:15:37.349192  106960 machine.go:88] provisioning docker machine ...
	I0817 00:15:37.349192  106960 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-20210817001119-111344"
	I0817 00:15:37.358157  106960 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210817001119-111344
	W0817 00:15:37.891725  106960 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210817001119-111344 returned with exit code 1
	I0817 00:15:37.891725  106960 machine.go:91] provisioned docker machine in 542.512ms
	I0817 00:15:37.900842  106960 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0817 00:15:37.909470  106960 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210817001119-111344
	W0817 00:15:38.415477  106960 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210817001119-111344 returned with exit code 1
	I0817 00:15:38.415477  106960 retry.go:31] will retry after 234.428547ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0817 00:15:38.657612  106960 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210817001119-111344
	W0817 00:15:39.123180  106960 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210817001119-111344 returned with exit code 1
	I0817 00:15:39.123599  106960 retry.go:31] will retry after 346.739061ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0817 00:15:39.478219  106960 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210817001119-111344
	W0817 00:15:39.998170  106960 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210817001119-111344 returned with exit code 1
	W0817 00:15:39.998684  106960 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0817 00:15:39.998684  106960 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0817 00:15:39.998684  106960 fix.go:57] fixHost completed within 7.4939072s
	I0817 00:15:39.998684  106960 start.go:80] releasing machines lock for "kubernetes-upgrade-20210817001119-111344", held for 7.4939072s
	W0817 00:15:39.999185  106960 out.go:242] * Failed to start docker container. Running "minikube delete -p kubernetes-upgrade-20210817001119-111344" may fix it: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	* Failed to start docker container. Running "minikube delete -p kubernetes-upgrade-20210817001119-111344" may fix it: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	I0817 00:15:40.001822  106960 out.go:177] 
	W0817 00:15:40.002192  106960 out.go:242] X Exiting due to GUEST_PROVISION_CONTAINER_EXITED: Docker container exited prematurely after it was created, consider investigating Docker's performance/health.
	X Exiting due to GUEST_PROVISION_CONTAINER_EXITED: Docker container exited prematurely after it was created, consider investigating Docker's performance/health.
	I0817 00:15:40.003865  106960 out.go:177] 

                                                
                                                
** /stderr **
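Everything from fix.go onward in the stderr block above is minikube polling for the container's published SSH port. The inspect template exits 1 on each attempt because an exited container publishes no ports, so there is no "22/tcp" entry in .NetworkSettings.Ports for the template to index; after three retries provisioning gives up with GUEST_PROVISION_CONTAINER_EXITED. A hypothetical manual reproduction of the probe that was being retried:

	# Prints the host port mapped to 22/tcp on a running container; on the
	# exited container above the index lookup errors out, which is the
	# "returned with exit code 1" driving the retry loop in retry.go.
	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  kubernetes-upgrade-20210817001119-111344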
E0817 00:15:40.814714  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210817000749-111344\client.crt: The system cannot find the path specified.
version_upgrade_test.go:247: failed to upgrade with newest k8s version. args: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20210817001119-111344 --memory=2200 --kubernetes-version=v1.22.0-rc.0 --alsologtostderr -v=1 --driver=docker : exit status 80
version_upgrade_test.go:250: (dbg) Run:  kubectl --context kubernetes-upgrade-20210817001119-111344 version --output=json
version_upgrade_test.go:250: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-20210817001119-111344 version --output=json: exit status 1 (173.4148ms)

                                                
                                                
** stderr ** 
	error: context "kubernetes-upgrade-20210817001119-111344" does not exist

                                                
                                                
** /stderr **
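The missing context follows directly from the failed start: the cluster never finished provisioning, so minikube never wrote a kubernetes-upgrade-* entry into the kubeconfig. A quick manual check, not part of the test run, that would confirm which contexts actually exist:

	# Lists every context in the active kubeconfig; the upgrade profile
	# should be absent after the failed start above.
	kubectl config get-contexts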
version_upgrade_test.go:252: error running kubectl: exit status 1
panic.go:613: *** TestKubernetesUpgrade FAILED at 2021-08-17 00:15:41.0042835 +0000 GMT m=+3998.095187501
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect kubernetes-upgrade-20210817001119-111344
helpers_test.go:236: (dbg) docker inspect kubernetes-upgrade-20210817001119-111344:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da",
	        "Created": "2021-08-17T00:11:33.2667132Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 130,
	            "Error": "network 76fe7f8c8a06cbce45c22be0496564774e7468502c41c92f5937f12e17cdef08 not found",
	            "StartedAt": "2021-08-17T00:11:35.924797Z",
	            "FinishedAt": "2021-08-17T00:15:05.3732415Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da/hostname",
	        "HostsPath": "/var/lib/docker/containers/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da/hosts",
	        "LogPath": "/var/lib/docker/containers/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da/2c252d82645a9040b4745f3de0dcfb840d2ae7a85ef0a7fe7960ed92d23b34da-json.log",
	        "Name": "/kubernetes-upgrade-20210817001119-111344",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-20210817001119-111344:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-20210817001119-111344",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/afe12c7d243f8c21e4659e09f7081dd299d594587a068386ae355047ce465d53-init/diff:/var/lib/docker/overlay2/e167e57d4b442602b2435f5ffd2147b1da53de34df49d96ce69565867fcf3850/diff:/var/lib/docker/overlay2/dbfef15a73962254d5bcc2c91a409021fc3573c3135096621d707c6f4feaac7d/diff:/var/lib/docker/overlay2/7fc44848dc580276135d9db2b62ce047cfba1909de5e91acbe8c1a5fc8fb3649/diff:/var/lib/docker/overlay2/493996ff2d6a75ef70db2749dded6936397fe536c32e28dda979b8af93e19f13/diff:/var/lib/docker/overlay2/b862553905dec6f42a41351a012fdce386251d97160f74f6b1feb3b455e1f53a/diff:/var/lib/docker/overlay2/517a8b2830d9e81ff950c8305063a6681219abbb7b22f3a87587fa819a0728ed/diff:/var/lib/docker/overlay2/f2b268080cfd9bbb64731ea6b7cb2ec64077e6c2701c2ab6e8b358a541056c5d/diff:/var/lib/docker/overlay2/ee5e612696333c681900cad605a1f678e9114e9c7ecf70717fad21aea1e52992/diff:/var/lib/docker/overlay2/6f44289af0b09a02645c237aabeff61487c57040b9531c0f7bd97517308bfd57/diff:/var/lib/docker/overlay2/f98f67
21a411bacf9d310d4d4405fbd528fa90d60af5ffabda9d55cef9ef3033/diff:/var/lib/docker/overlay2/8bc2e0f6b7c2aeccc6a944f316dbac5672f8685cc5dd5d3c2fc4bd370db4949f/diff:/var/lib/docker/overlay2/ef9e793c1e243004ff088f210369994837eb19a8abd21cf93f75257155445f16/diff:/var/lib/docker/overlay2/48fa7f37fc37f8220a31f4294bc800ef7a33c53c10bdc23d7dc68f27cfe4e535/diff:/var/lib/docker/overlay2/54bc5e0e0c32fdc66ce3eeb345721201a63a0c878d4665607246cd4aa5af61e5/diff:/var/lib/docker/overlay2/398c3fc63254fcc564086ced0eb7211f2d474f8bbdcd43ee27fd609e767c44a6/diff:/var/lib/docker/overlay2/796acb5b93384da004a8065a332cbb07c952569bdd7bb5e551b218e4c5c61f73/diff:/var/lib/docker/overlay2/d90baef87ad95bdfb14a2f35e4cb62336e18c21eb934266f43bfbe017252b857/diff:/var/lib/docker/overlay2/c16752decc8ef06fce4eebdf4ff4725414f3aa80cccd7b3ffdc325095930c0b4/diff:/var/lib/docker/overlay2/a679084eec181b0e1408e573d1ac08c47af1fd8266eb5884bf1a38d5ba0ddbbc/diff:/var/lib/docker/overlay2/15becb79b0d40211562ae33ddc5ec776276b9ae42c8a9f4645dcc6442b36f771/diff:/var/lib/d
ocker/overlay2/068a9a5dce1094eb72788237bd9cda4c76345774d5e647f0af81302a75861f4a/diff:/var/lib/docker/overlay2/74b9e9d807e09734ee96c76bc67adc56c9e3286b39315f89f6747c8c917ad2e5/diff:/var/lib/docker/overlay2/75de8e4895a0b4efe563705c06184db384b5c40154856b9bca2106a8d59fc151/diff:/var/lib/docker/overlay2/cbca3c40b21fee2ef276744168492f17203934aca8de4b459edae2fa55b6bb02/diff:/var/lib/docker/overlay2/584d28a6308bb998bd89d7d92c45b57b9dd66de472d166972d2f5195afd9dd44/diff:/var/lib/docker/overlay2/9c722118749c036eb2d00ba5a6925c5f32b121d64974c99e2de552b26a8bb7cd/diff:/var/lib/docker/overlay2/24908c792743f57c182587c66263f074ed86ae7c5812c631dea82d8ec6650e81/diff:/var/lib/docker/overlay2/9a8de59bfb816b3fc2f0fd522ef966196534483b5e87aafd180dd8b07e9c3582/diff:/var/lib/docker/overlay2/df46d170084213da519dea7e0f402d51272dc10df4d7cd7f37c528c411ac7000/diff:/var/lib/docker/overlay2/36b86a6f515e5882426e598755bb77d43cc340fd20798dfd0a810cd2ab96eeb6/diff:/var/lib/docker/overlay2/b54ac02f70047359cd143a32f862d18498cb556877ccfd252defb9d17fc
9d9f5/diff:/var/lib/docker/overlay2/971b77d080920997e1d0d0936f312a9a322ccd6ab9920c83a8eb5d14b93c3849/diff:/var/lib/docker/overlay2/5b5c21ae360c7e0738c0048bc3fe8d7d3cc0640d266660121f3968f675f42063/diff:/var/lib/docker/overlay2/e07bf2561a99ba47435b8f84b267268e02e9e4ff47832bd5054ee28bb1ca5001/diff:/var/lib/docker/overlay2/0c560be48f01814af21ec54fc79ea5e8db28f05e967a17b331be28ad61c75483/diff:/var/lib/docker/overlay2/27930667f3fd0fd38c13a39c0590c03a2c3b3ba04f0a3c946167be6a40f50c46/diff",
	                "MergedDir": "/var/lib/docker/overlay2/afe12c7d243f8c21e4659e09f7081dd299d594587a068386ae355047ce465d53/merged",
	                "UpperDir": "/var/lib/docker/overlay2/afe12c7d243f8c21e4659e09f7081dd299d594587a068386ae355047ce465d53/diff",
	                "WorkDir": "/var/lib/docker/overlay2/afe12c7d243f8c21e4659e09f7081dd299d594587a068386ae355047ce465d53/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-20210817001119-111344",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-20210817001119-111344/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-20210817001119-111344",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-20210817001119-111344",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-20210817001119-111344",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5e7fda3ade8792c0aba0069fda9edea4aa344e0c9943b3ec20c40c66d95a42b9",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {},
	            "SandboxKey": "/var/run/docker/netns/5e7fda3ade87",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-20210817001119-111344": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "2c252d82645a",
	                        "kubernetes-upgrade-20210817001119-111344"
	                    ],
	                    "NetworkID": "76fe7f8c8a06cbce45c22be0496564774e7468502c41c92f5937f12e17cdef08",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
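The inspect dump confirms the premature exit the test reported: State.Status is "exited" with ExitCode 130, FinishedAt matches the systemd power-off in the container log, and State.Error names the same network that docker network inspect could not find. When only those fields matter, a Go-template query is a lighter-weight post-mortem than the full JSON; a hypothetical example:

	# -f takes a Go template evaluated against the inspect result.
	docker inspect \
	  -f 'status={{.State.Status}} exit={{.State.ExitCode}} err={{.State.Error}} finished={{.State.FinishedAt}}' \
	  kubernetes-upgrade-20210817001119-111344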
helpers_test.go:240: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p kubernetes-upgrade-20210817001119-111344 -n kubernetes-upgrade-20210817001119-111344
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p kubernetes-upgrade-20210817001119-111344 -n kubernetes-upgrade-20210817001119-111344: exit status 7 (2.2102426s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "kubernetes-upgrade-20210817001119-111344" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:176: Cleaning up "kubernetes-upgrade-20210817001119-111344" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-20210817001119-111344
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-20210817001119-111344: (12.4572184s)
--- FAIL: TestKubernetesUpgrade (276.67s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (21.74s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-20210817001119-111344

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:208: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p stopped-upgrade-20210817001119-111344: exit status 110 (20.0018689s)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |------------|--------------------------------------------------------------|--------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	|  Command   |                             Args                             |                  Profile                   |          User           | Version |          Start Time           |           End Time            |
	|------------|--------------------------------------------------------------|--------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	| delete     | -p                                                           | docker-network-20210816233948-111344       | WINDOWS-SERVER-\jenkins | v1.22.0 | Mon, 16 Aug 2021 23:41:27 GMT | Mon, 16 Aug 2021 23:41:36 GMT |
	|            | docker-network-20210816233948-111344                         |                                            |                         |         |                               |                               |
	| start      | -p                                                           | existing-network-20210816234138-111344     | WINDOWS-SERVER-\jenkins | v1.22.0 | Mon, 16 Aug 2021 23:41:38 GMT | Mon, 16 Aug 2021 23:43:13 GMT |
	|            | existing-network-20210816234138-111344                       |                                            |                         |         |                               |                               |
	|            | --network=existing-network                                   |                                            |                         |         |                               |                               |
	| delete     | -p                                                           | existing-network-20210816234138-111344     | WINDOWS-SERVER-\jenkins | v1.22.0 | Mon, 16 Aug 2021 23:43:13 GMT | Mon, 16 Aug 2021 23:43:23 GMT |
	|            | existing-network-20210816234138-111344                       |                                            |                         |         |                               |                               |
	| start      | -p                                                           | multinode-20210816234324-111344            | WINDOWS-SERVER-\jenkins | v1.22.0 | Mon, 16 Aug 2021 23:43:24 GMT | Mon, 16 Aug 2021 23:47:03 GMT |
	|            | multinode-20210816234324-111344                              |                                            |                         |         |                               |                               |
	|            | --wait=true --memory=2200                                    |                                            |                         |         |                               |                               |
	|            | --nodes=2 -v=8                                               |                                            |                         |         |                               |                               |
	|            | --alsologtostderr                                            |                                            |                         |         |                               |                               |
	|            | --driver=docker                                              |                                            |                         |         |                               |                               |
	| kubectl    | -p multinode-20210816234324-111344 -- apply -f               | multinode-20210816234324-111344            | WINDOWS-SERVER-\jenkins | v1.22.0 | Mon, 16 Aug 2021 23:47:09 GMT | Mon, 16 Aug 2021 23:47:11 GMT |
	|            | ./testdata/multinodes/multinode-pod-dns-test.yaml            |                                            |                         |         |                               |                               |
	| kubectl    | -p                                                           | multinode-20210816234324-111344            | WINDOWS-SERVER-\jenkins | v1.22.0 | Mon, 16 Aug 2021 23:47:12 GMT | Mon, 16 Aug 2021 23:47:16 GMT |
	|            | multinode-20210816234324-111344                              |                                            |                         |         |                               |                               |
	|            | -- rollout status                                            |                                            |                         |         |                               |                               |
	|            | deployment/busybox                                           |                                            |                         |         |                               |                               |
	| kubectl    | -p multinode-20210816234324-111344                           | multinode-20210816234324-111344            | WINDOWS-SERVER-\jenkins | v1.22.0 | Mon, 16 Aug 2021 23:47:16 GMT | Mon, 16 Aug 2021 23:47:18 GMT |
	|            | -- get pods -o                                               |                                            |                         |         |                               |                               |
	|            | jsonpath='{.items[*].status.podIP}'                          |                                            |                         |         |                               |                               |
	| kubectl    | -p multinode-20210816234324-111344                           | multinode-20210816234324-111344            | WINDOWS-SERVER-\jenkins | v1.22.0 | Mon, 16 Aug 2021 23:47:18 GMT | Mon, 16 Aug 2021 23:47:19 GMT |
	|            | -- get pods -o                                               |                                            |                         |         |                               |                               |
	|            | jsonpath='{.items[*].metadata.name}'                         |                                            |                         |         |                               |                               |
	| kubectl    | -p                                                           | multinode-20210816234324-111344            | WINDOWS-SERVER-\jenkins | v1.22.0 | Mon, 16 Aug 2021 23:47:20 GMT | Mon, 16 Aug 2021 23:47:23 GMT |
	|            | multinode-20210816234324-111344                              |                                            |                         |         |                               |                               |
	|            | -- exec                                                      |                                            |                         |         |                               |                               |
	|            | busybox-84b6686758-8c8vg --                                  |                                            |                         |         |                               |                               |
	|            | nslookup kubernetes.io                                       |                                            |                         |         |                               |                               |
	| kubectl    | -p                                                           | multinode-20210816234324-111344            | WINDOWS-SERVER-\jenkins | v1.22.0 | Mon, 16 Aug 2021 23:47:23 GMT | Mon, 16 Aug 2021 23:47:26 GMT |
	|            | multinode-20210816234324-111344                              |                                            |                         |         |                               |                               |
	|            | -- exec                                                      |                                            |                         |         |                               |                               |
	|            | busybox-84b6686758-zx8lt --                                  |                                            |                         |         |                               |                               |
	|            | nslookup kubernetes.io                                       |                                            |                         |         |                               |                               |
	| kubectl    | -p                                                           | multinode-20210816234324-111344            | WINDOWS-SERVER-\jenkins | v1.22.0 | Mon, 16 Aug 2021 23:47:26 GMT | Mon, 16 Aug 2021 23:47:28 GMT |
	|            | multinode-20210816234324-111344                              |                                            |                         |         |                               |                               |
	|            | -- exec                                                      |                                            |                         |         |                               |                               |
	|            | busybox-84b6686758-8c8vg --                                  |                                            |                         |         |                               |                               |
	|            | nslookup kubernetes.default                                  |                                            |                         |         |                               |                               |
	| kubectl    | -p                                                           | multinode-20210816234324-111344            | WINDOWS-SERVER-\jenkins | v1.22.0 | Mon, 16 Aug 2021 23:47:28 GMT | Mon, 16 Aug 2021 23:47:30 GMT |
	|            | multinode-20210816234324-111344                              |                                            |                         |         |                               |                               |
	|            | -- exec                                                      |                                            |                         |         |                               |                               |
	|            | busybox-84b6686758-zx8lt --                                  |                                            |                         |         |                               |                               |
	|            | nslookup kubernetes.default                                  |                                            |                         |         |                               |                               |
	| kubectl    | -p multinode-20210816234324-111344                           | multinode-20210816234324-111344            | WINDOWS-SERVER-\jenkins | v1.22.0 | Mon, 16 Aug 2021 23:47:30 GMT | Mon, 16 Aug 2021 23:47:32 GMT |
	|            | -- exec busybox-84b6686758-8c8vg                             |                                            |                         |         |                               |                               |
	|            | -- nslookup                                                  |                                            |                         |         |                               |                               |
	|            | kubernetes.default.svc.cluster.local                         |                                            |                         |         |                               |                               |
	| kubectl    | -p multinode-20210816234324-111344                           | multinode-20210816234324-111344            | WINDOWS-SERVER-\jenkins | v1.22.0 | Mon, 16 Aug 2021 23:47:32 GMT | Mon, 16 Aug 2021 23:47:34 GMT |
	|            | -- exec busybox-84b6686758-zx8lt                             |                                            |                         |         |                               |                               |
	|            | -- nslookup                                                  |                                            |                         |         |                               |                               |
	|            | kubernetes.default.svc.cluster.local                         |                                            |                         |         |                               |                               |
	| kubectl    | -p multinode-20210816234324-111344                           | multinode-20210816234324-111344            | WINDOWS-SERVER-\jenkins | v1.22.0 | Mon, 16 Aug 2021 23:47:34 GMT | Mon, 16 Aug 2021 23:47:36 GMT |
	|            | -- get pods -o                                               |                                            |                         |         |                               |                               |
	|            | jsonpath='{.items[*].metadata.name}'                         |                                            |                         |         |                               |                               |
	| kubectl    | -p                                                           | multinode-20210816234324-111344            | WINDOWS-SERVER-\jenkins | v1.22.0 | Mon, 16 Aug 2021 23:47:36 GMT | Mon, 16 Aug 2021 23:47:38 GMT |
	|            | multinode-20210816234324-111344                              |                                            |                         |         |                               |                               |
	|            | -- exec                                                      |                                            |                         |         |                               |                               |
	|            | busybox-84b6686758-8c8vg                                     |                                            |                         |         |                               |                               |
	|            | -- sh -c nslookup                                            |                                            |                         |         |                               |                               |
	|            | host.minikube.internal | awk                                 |                                            |                         |         |                               |                               |
	|            | 'NR==5' | cut -d' ' -f3                                      |                                            |                         |         |                               |                               |
	| kubectl    | -p                                                           | multinode-20210816234324-111344            | WINDOWS-SERVER-\jenkins | v1.22.0 | Mon, 16 Aug 2021 23:47:38 GMT | Mon, 16 Aug 2021 23:47:40 GMT |
	|            | multinode-20210816234324-111344                              |                                            |                         |         |                               |                               |
	|            | -- exec                                                      |                                            |                         |         |                               |                               |
	|            | busybox-84b6686758-8c8vg -- sh                               |                                            |                         |         |                               |                               |
	|            | -c ping -c 1 192.168.65.2                                    |                                            |                         |         |                               |                               |
	| kubectl    | -p                                                           | multinode-20210816234324-111344            | WINDOWS-SERVER-\jenkins | v1.22.0 | Mon, 16 Aug 2021 23:47:40 GMT | Mon, 16 Aug 2021 23:47:42 GMT |
	|            | multinode-20210816234324-111344                              |                                            |                         |         |                               |                               |
	|            | -- exec                                                      |                                            |                         |         |                               |                               |
	|            | busybox-84b6686758-zx8lt                                     |                                            |                         |         |                               |                               |
	|            | -- sh -c nslookup                                            |                                            |                         |         |                               |                               |
	|            | host.minikube.internal | awk                                 |                                            |                         |         |                               |                               |
	|            | 'NR==5' | cut -d' ' -f3                                      |                                            |                         |         |                               |                               |
	| kubectl    | -p                                                           | multinode-20210816234324-111344            | WINDOWS-SERVER-\jenkins | v1.22.0 | Mon, 16 Aug 2021 23:47:42 GMT | Mon, 16 Aug 2021 23:47:44 GMT |
	|            | multinode-20210816234324-111344                              |                                            |                         |         |                               |                               |
	|            | -- exec                                                      |                                            |                         |         |                               |                               |
	|            | busybox-84b6686758-zx8lt -- sh                               |                                            |                         |         |                               |                               |
	|            | -c ping -c 1 192.168.65.2                                    |                                            |                         |         |                               |                               |
	| node       | add -p                                                       | multinode-20210816234324-111344            | WINDOWS-SERVER-\jenkins | v1.22.0 | Mon, 16 Aug 2021 23:47:44 GMT | Mon, 16 Aug 2021 23:48:59 GMT |
	|            | multinode-20210816234324-111344                              |                                            |                         |         |                               |                               |
	|            | -v 3 --alsologtostderr                                       |                                            |                         |         |                               |                               |
	| profile    | list --output json                                           | minikube                                   | WINDOWS-SERVER-\jenkins | v1.22.0 | Mon, 16 Aug 2021 23:49:06 GMT | Mon, 16 Aug 2021 23:49:10 GMT |
	| -p         | multinode-20210816234324-111344                              | multinode-20210816234324-111344            | WINDOWS-SERVER-\jenkins | v1.22.0 | Mon, 16 Aug 2021 23:49:17 GMT | Mon, 16 Aug 2021 23:49:19 GMT |
	|            | cp testdata\cp-test.txt                                      |                                            |                         |         |                               |                               |
	|            | /home/docker/cp-test.txt                                     |                                            |                         |         |                               |                               |
	| -p         | multinode-20210816234324-111344                              | multinode-20210816234324-111344            | WINDOWS-SERVER-\jenkins | v1.22.0 | Mon, 16 Aug 2021 23:49:20 GMT | Mon, 16 Aug 2021 23:49:23 GMT |
	|            | ssh sudo cat                                                 |                                            |                         |         |                               |                               |
	|            | /home/docker/cp-test.txt                                     |                                            |                         |         |                               |                               |
	| -p         | multinode-20210816234324-111344 cp testdata\cp-test.txt      | multinode-20210816234324-111344            | WINDOWS-SERVER-\jenkins | v1.22.0 | Mon, 16 Aug 2021 23:49:23 GMT | Mon, 16 Aug 2021 23:49:26 GMT |
	|            | multinode-20210816234324-111344-m02:/home/docker/cp-test.txt |                                            |                         |         |                               |                               |
	| -p         | multinode-20210816234324-111344                              | multinode-20210816234324-111344            | WINDOWS-SERVER-\jenkins | v1.22.0 | Mon, 16 Aug 2021 23:49:27 GMT | Mon, 16 Aug 2021 23:49:30 GMT |
	|            | ssh -n                                                       |                                            |                         |         |                               |                               |
	|            | multinode-20210816234324-111344-m02                          |                                            |                         |         |                               |                               |
	|            | sudo cat /home/docker/cp-test.txt                            |                                            |                         |         |                               |                               |
	| -p         | multinode-20210816234324-111344 cp testdata\cp-test.txt      | multinode-20210816234324-111344            | WINDOWS-SERVER-\jenkins | v1.22.0 | Mon, 16 Aug 2021 23:49:30 GMT | Mon, 16 Aug 2021 23:49:33 GMT |
	|            | multinode-20210816234324-111344-m03:/home/docker/cp-test.txt |                                            |                         |         |                               |                               |
	| -p         | multinode-20210816234324-111344                              | multinode-20210816234324-111344            | WINDOWS-SERVER-\jenkins | v1.22.0 | Mon, 16 Aug 2021 23:49:34 GMT | Mon, 16 Aug 2021 23:49:37 GMT |
	|            | ssh -n                                                       |                                            |                         |         |                               |                               |
	|            | multinode-20210816234324-111344-m03                          |                                            |                         |         |                               |                               |
	|            | sudo cat /home/docker/cp-test.txt                            |                                            |                         |         |                               |                               |
	| -p         | multinode-20210816234324-111344                              | multinode-20210816234324-111344            | WINDOWS-SERVER-\jenkins | v1.22.0 | Mon, 16 Aug 2021 23:49:37 GMT | Mon, 16 Aug 2021 23:49:42 GMT |
	|            | node stop m03                                                |                                            |                         |         |                               |                               |
	| -p         | multinode-20210816234324-111344                              | multinode-20210816234324-111344            | WINDOWS-SERVER-\jenkins | v1.22.0 | Mon, 16 Aug 2021 23:49:53 GMT | Mon, 16 Aug 2021 23:50:25 GMT |
	|            | node start m03                                               |                                            |                         |         |                               |                               |
	|            | --alsologtostderr                                            |                                            |                         |         |                               |                               |
	| stop       | -p                                                           | multinode-20210816234324-111344            | WINDOWS-SERVER-\jenkins | v1.22.0 | Mon, 16 Aug 2021 23:50:33 GMT | Mon, 16 Aug 2021 23:51:04 GMT |
	|            | multinode-20210816234324-111344                              |                                            |                         |         |                               |                               |
	| start      | -p                                                           | multinode-20210816234324-111344            | WINDOWS-SERVER-\jenkins | v1.22.0 | Mon, 16 Aug 2021 23:51:04 GMT | Mon, 16 Aug 2021 23:55:16 GMT |
	|            | multinode-20210816234324-111344                              |                                            |                         |         |                               |                               |
	|            | --wait=true -v=8                                             |                                            |                         |         |                               |                               |
	|            | --alsologtostderr                                            |                                            |                         |         |                               |                               |
	| -p         | multinode-20210816234324-111344                              | multinode-20210816234324-111344            | WINDOWS-SERVER-\jenkins | v1.22.0 | Mon, 16 Aug 2021 23:55:16 GMT | Mon, 16 Aug 2021 23:55:34 GMT |
	|            | node delete m03                                              |                                            |                         |         |                               |                               |
	| -p         | multinode-20210816234324-111344                              | multinode-20210816234324-111344            | WINDOWS-SERVER-\jenkins | v1.22.0 | Mon, 16 Aug 2021 23:55:40 GMT | Mon, 16 Aug 2021 23:56:08 GMT |
	|            | stop                                                         |                                            |                         |         |                               |                               |
	| start      | -p                                                           | multinode-20210816234324-111344            | WINDOWS-SERVER-\jenkins | v1.22.0 | Mon, 16 Aug 2021 23:56:13 GMT | Mon, 16 Aug 2021 23:58:55 GMT |
	|            | multinode-20210816234324-111344                              |                                            |                         |         |                               |                               |
	|            | --wait=true -v=8                                             |                                            |                         |         |                               |                               |
	|            | --alsologtostderr                                            |                                            |                         |         |                               |                               |
	|            | --driver=docker                                              |                                            |                         |         |                               |                               |
	| start      | -p                                                           | multinode-20210816234324-111344-m03        | WINDOWS-SERVER-\jenkins | v1.22.0 | Mon, 16 Aug 2021 23:59:02 GMT | Tue, 17 Aug 2021 00:00:48 GMT |
	|            | multinode-20210816234324-111344-m03                          |                                            |                         |         |                               |                               |
	|            | --driver=docker                                              |                                            |                         |         |                               |                               |
	| delete     | -p                                                           | multinode-20210816234324-111344-m03        | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:00:54 GMT | Tue, 17 Aug 2021 00:01:07 GMT |
	|            | multinode-20210816234324-111344-m03                          |                                            |                         |         |                               |                               |
	| delete     | -p                                                           | multinode-20210816234324-111344            | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:01:07 GMT | Tue, 17 Aug 2021 00:01:27 GMT |
	|            | multinode-20210816234324-111344                              |                                            |                         |         |                               |                               |
	| start      | -p                                                           | test-preload-20210817000127-111344         | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:01:28 GMT | Tue, 17 Aug 2021 00:03:42 GMT |
	|            | test-preload-20210817000127-111344                           |                                            |                         |         |                               |                               |
	|            | --memory=2200 --alsologtostderr                              |                                            |                         |         |                               |                               |
	|            | --wait=true --preload=false                                  |                                            |                         |         |                               |                               |
	|            | --driver=docker                                              |                                            |                         |         |                               |                               |
	|            | --kubernetes-version=v1.17.0                                 |                                            |                         |         |                               |                               |
	| ssh        | -p                                                           | test-preload-20210817000127-111344         | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:03:42 GMT | Tue, 17 Aug 2021 00:03:48 GMT |
	|            | test-preload-20210817000127-111344                           |                                            |                         |         |                               |                               |
	|            | -- docker pull busybox                                       |                                            |                         |         |                               |                               |
	| start      | -p                                                           | test-preload-20210817000127-111344         | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:03:48 GMT | Tue, 17 Aug 2021 00:05:12 GMT |
	|            | test-preload-20210817000127-111344                           |                                            |                         |         |                               |                               |
	|            | --memory=2200 --alsologtostderr                              |                                            |                         |         |                               |                               |
	|            | -v=1 --wait=true --driver=docker                             |                                            |                         |         |                               |                               |
	|            | --kubernetes-version=v1.17.3                                 |                                            |                         |         |                               |                               |
	| ssh        | -p                                                           | test-preload-20210817000127-111344         | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:05:13 GMT | Tue, 17 Aug 2021 00:05:16 GMT |
	|            | test-preload-20210817000127-111344                           |                                            |                         |         |                               |                               |
	|            | -- docker images                                             |                                            |                         |         |                               |                               |
	| delete     | -p                                                           | test-preload-20210817000127-111344         | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:05:17 GMT | Tue, 17 Aug 2021 00:05:28 GMT |
	|            | test-preload-20210817000127-111344                           |                                            |                         |         |                               |                               |
	| start      | -p                                                           | scheduled-stop-20210817000528-111344       | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:05:28 GMT | Tue, 17 Aug 2021 00:07:06 GMT |
	|            | scheduled-stop-20210817000528-111344                         |                                            |                         |         |                               |                               |
	|            | --memory=2048 --driver=docker                                |                                            |                         |         |                               |                               |
	| stop       | -p                                                           | scheduled-stop-20210817000528-111344       | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:07:06 GMT | Tue, 17 Aug 2021 00:07:09 GMT |
	|            | scheduled-stop-20210817000528-111344                         |                                            |                         |         |                               |                               |
	|            | --schedule 5m                                                |                                            |                         |         |                               |                               |
	| ssh        | -p                                                           | scheduled-stop-20210817000528-111344       | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:07:13 GMT | Tue, 17 Aug 2021 00:07:16 GMT |
	|            | scheduled-stop-20210817000528-111344                         |                                            |                         |         |                               |                               |
	|            | -- sudo systemctl show                                       |                                            |                         |         |                               |                               |
	|            | minikube-scheduled-stop --no-page                            |                                            |                         |         |                               |                               |
	| stop       | -p                                                           | scheduled-stop-20210817000528-111344       | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:07:17 GMT | Tue, 17 Aug 2021 00:07:20 GMT |
	|            | scheduled-stop-20210817000528-111344                         |                                            |                         |         |                               |                               |
	|            | --schedule 5s                                                |                                            |                         |         |                               |                               |
	| delete     | -p                                                           | scheduled-stop-20210817000528-111344       | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:07:39 GMT | Tue, 17 Aug 2021 00:07:49 GMT |
	|            | scheduled-stop-20210817000528-111344                         |                                            |                         |         |                               |                               |
	| start      | -p                                                           | skaffold-20210817000749-111344             | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:07:50 GMT | Tue, 17 Aug 2021 00:09:30 GMT |
	|            | skaffold-20210817000749-111344                               |                                            |                         |         |                               |                               |
	|            | --memory=2600 --driver=docker                                |                                            |                         |         |                               |                               |
	| docker-env | --shell none -p                                              | skaffold-20210817000749-111344             | skaffold                | v1.22.0 | Tue, 17 Aug 2021 00:09:33 GMT | Tue, 17 Aug 2021 00:09:38 GMT |
	|            | skaffold-20210817000749-111344                               |                                            |                         |         |                               |                               |
	|            | --user=skaffold                                              |                                            |                         |         |                               |                               |
	| delete     | -p                                                           | skaffold-20210817000749-111344             | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:10:30 GMT | Tue, 17 Aug 2021 00:10:44 GMT |
	|            | skaffold-20210817000749-111344                               |                                            |                         |         |                               |                               |
	| delete     | -p                                                           | insufficient-storage-20210817001044-111344 | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:11:11 GMT | Tue, 17 Aug 2021 00:11:19 GMT |
	|            | insufficient-storage-20210817001044-111344                   |                                            |                         |         |                               |                               |
	| start      | -p                                                           | kubernetes-upgrade-20210817001119-111344   | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:11:19 GMT | Tue, 17 Aug 2021 00:14:47 GMT |
	|            | kubernetes-upgrade-20210817001119-111344                     |                                            |                         |         |                               |                               |
	|            | --memory=2200                                                |                                            |                         |         |                               |                               |
	|            | --kubernetes-version=v1.14.0                                 |                                            |                         |         |                               |                               |
	|            | --alsologtostderr -v=1 --driver=docker                       |                                            |                         |         |                               |                               |
	| start      | -p                                                           | force-systemd-env-20210817001119-111344    | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:11:19 GMT | Tue, 17 Aug 2021 00:14:48 GMT |
	|            | force-systemd-env-20210817001119-111344                      |                                            |                         |         |                               |                               |
	|            | --memory=2048 --alsologtostderr -v=5                         |                                            |                         |         |                               |                               |
	|            | --driver=docker                                              |                                            |                         |         |                               |                               |
	| -p         | force-systemd-env-20210817001119-111344                      | force-systemd-env-20210817001119-111344    | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:14:48 GMT | Tue, 17 Aug 2021 00:14:55 GMT |
	|            | ssh docker info --format                                     |                                            |                         |         |                               |                               |
	|            | {{.CgroupDriver}}                                            |                                            |                         |         |                               |                               |
	| stop       | -p                                                           | kubernetes-upgrade-20210817001119-111344   | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:14:48 GMT | Tue, 17 Aug 2021 00:15:09 GMT |
	|            | kubernetes-upgrade-20210817001119-111344                     |                                            |                         |         |                               |                               |
	| delete     | -p                                                           | force-systemd-env-20210817001119-111344    | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:14:56 GMT | Tue, 17 Aug 2021 00:15:15 GMT |
	|            | force-systemd-env-20210817001119-111344                      |                                            |                         |         |                               |                               |
	| delete     | -p                                                           | kubernetes-upgrade-20210817001119-111344   | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:15:43 GMT | Tue, 17 Aug 2021 00:15:56 GMT |
	|            | kubernetes-upgrade-20210817001119-111344                     |                                            |                         |         |                               |                               |
	| start      | -p                                                           | offline-docker-20210817001119-111344       | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:11:19 GMT | Tue, 17 Aug 2021 00:15:59 GMT |
	|            | offline-docker-20210817001119-111344                         |                                            |                         |         |                               |                               |
	|            | --alsologtostderr -v=1 --memory=2048                         |                                            |                         |         |                               |                               |
	|            | --wait=true --driver=docker                                  |                                            |                         |         |                               |                               |
	| delete     | -p                                                           | offline-docker-20210817001119-111344       | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:16:00 GMT | Tue, 17 Aug 2021 00:16:18 GMT |
	|            | offline-docker-20210817001119-111344                         |                                            |                         |         |                               |                               |
	| start      | -p                                                           | stopped-upgrade-20210817001119-111344      | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:16:22 GMT | Tue, 17 Aug 2021 00:18:14 GMT |
	|            | stopped-upgrade-20210817001119-111344                        |                                            |                         |         |                               |                               |
	|            | --memory=2200 --alsologtostderr -v=1                         |                                            |                         |         |                               |                               |
	|            | --driver=docker                                              |                                            |                         |         |                               |                               |
	|------------|--------------------------------------------------------------|--------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/17 00:16:22
	Running on machine: windows-server-2
	Binary: Built with gc go1.16.7 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0817 00:16:22.706064   94724 out.go:298] Setting OutFile to fd 1112 ...
	I0817 00:16:22.707063   94724 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 00:16:22.707063   94724 out.go:311] Setting ErrFile to fd 1992...
	I0817 00:16:22.708060   94724 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 00:16:22.721063   94724 out.go:305] Setting JSON to false
	I0817 00:16:22.725061   94724 start.go:111] hostinfo: {"hostname":"windows-server-2","uptime":8367429,"bootTime":1620791953,"procs":152,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2f8328f4-5428-47c7-ab5a-b32e2504bd6f"}
	W0817 00:16:22.725061   94724 start.go:119] gopshost.Virtualization returned error: not implemented yet
	I0817 00:16:22.729073   94724 out.go:177] * [stopped-upgrade-20210817001119-111344] minikube v1.22.0 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	I0817 00:16:22.729073   94724 notify.go:169] Checking for updates...
	I0817 00:16:22.732055   94724 out.go:177]   - KUBECONFIG=C:\Users\jenkins\minikube-integration\kubeconfig
	I0817 00:16:22.736060   94724 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins\minikube-integration\.minikube
	I0817 00:16:22.738057   94724 out.go:177]   - MINIKUBE_LOCATION=12230
	I0817 00:16:22.738057   94724 config.go:177] Loaded profile config "stopped-upgrade-20210817001119-111344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0817 00:16:22.739056   94724 start_flags.go:521] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6
	I0817 00:16:22.480368   76328 out.go:177] * Pulling base image ...
	I0817 00:16:22.480368   76328 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
	I0817 00:16:22.480368   76328 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0817 00:16:22.480368   76328 preload.go:147] Found local preload: C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4
	I0817 00:16:22.480368   76328 cache.go:56] Caching tarball of preloaded images
	I0817 00:16:22.482378   76328 preload.go:173] Found C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0817 00:16:22.483351   76328 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on docker
	I0817 00:16:22.483351   76328 profile.go:148] Saving config to C:\Users\jenkins\minikube-integration\.minikube\profiles\docker-flags-20210817001618-111344\config.json ...
	I0817 00:16:22.483351   76328 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\.minikube\profiles\docker-flags-20210817001618-111344\config.json: {Name:mk6895553dfc2c2223edfbe57da1c5459fbe825b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 00:16:22.995231   76328 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0817 00:16:22.995231   76328 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0817 00:16:22.995231   76328 cache.go:205] Successfully downloaded all kic artifacts
	I0817 00:16:22.995584   76328 start.go:313] acquiring machines lock for docker-flags-20210817001618-111344: {Name:mka32d6bc4edf38c40379ad21695365141f11781 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 00:16:22.995854   76328 start.go:317] acquired machines lock for "docker-flags-20210817001618-111344" in 269.4µs
	I0817 00:16:22.996045   76328 start.go:89] Provisioning new machine with config: &{Name:docker-flags-20210817001618-111344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:docker-flags-20210817001618-111344 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0817 00:16:22.996250   76328 start.go:126] createHost starting for "" (driver="docker")
	I0817 00:16:22.998729   76328 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0817 00:16:22.999271   76328 start.go:160] libmachine.API.Create for "docker-flags-20210817001618-111344" (driver="docker")
	I0817 00:16:22.999471   76328 client.go:168] LocalClient.Create starting
	I0817 00:16:23.000135   76328 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem
	I0817 00:16:23.000582   76328 main.go:130] libmachine: Decoding PEM data...
	I0817 00:16:23.000582   76328 main.go:130] libmachine: Parsing certificate...
	I0817 00:16:23.002673   76328 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem
	I0817 00:16:23.002871   76328 main.go:130] libmachine: Decoding PEM data...
	I0817 00:16:23.002993   76328 main.go:130] libmachine: Parsing certificate...
	I0817 00:16:23.018020   76328 cli_runner.go:115] Run: docker network inspect docker-flags-20210817001618-111344 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0817 00:16:22.742069   94724 out.go:177] * Kubernetes 1.21.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.21.3
	I0817 00:16:22.742069   94724 driver.go:335] Setting default libvirt URI to qemu:///system
	I0817 00:16:24.611652   94724 docker.go:132] docker version: linux-20.10.2
	I0817 00:16:24.617462   94724 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 00:16:25.450492   94724 info.go:263] docker info: {ID:4XCY:3GZD:KK67:IPM7:RRQF:WWZF:OGQ6:X6HQ:572M:7N57:P63G:EAE5 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:true NGoroutines:57 SystemTime:2021-08-17 00:16:25.0829256 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0817 00:16:21.682324   57864 cli_runner.go:115] Run: docker container inspect pause-20210817001556-111344 --format={{.State.Status}}
	I0817 00:16:22.242404   57864 machine.go:88] provisioning docker machine ...
	I0817 00:16:22.242404   57864 ubuntu.go:169] provisioning hostname "pause-20210817001556-111344"
	I0817 00:16:22.253461   57864 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210817001556-111344
	I0817 00:16:22.747058   57864 main.go:130] libmachine: Using SSH client type: native
	I0817 00:16:22.747058   57864 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x4c95a0] 0x4c9560 <nil>  [] 0s} 127.0.0.1 55136 <nil> <nil>}
	I0817 00:16:22.747058   57864 main.go:130] libmachine: About to run SSH command:
	sudo hostname pause-20210817001556-111344 && echo "pause-20210817001556-111344" | sudo tee /etc/hostname
	I0817 00:16:23.083951   57864 main.go:130] libmachine: SSH cmd err, output: <nil>: pause-20210817001556-111344
	
	I0817 00:16:23.091187   57864 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210817001556-111344
	I0817 00:16:23.587541   57864 main.go:130] libmachine: Using SSH client type: native
	I0817 00:16:23.588159   57864 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x4c95a0] 0x4c9560 <nil>  [] 0s} 127.0.0.1 55136 <nil> <nil>}
	I0817 00:16:23.588159   57864 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-20210817001556-111344' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-20210817001556-111344/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-20210817001556-111344' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 00:16:23.875151   57864 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0817 00:16:23.875509   57864 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins\minikube-integration\.minikube CaCertPath:C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins\minikube-integration\.minikube}
	I0817 00:16:23.875509   57864 ubuntu.go:177] setting up certificates
	I0817 00:16:23.875509   57864 provision.go:83] configureAuth start
	I0817 00:16:23.882312   57864 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-20210817001556-111344
	I0817 00:16:24.382650   57864 provision.go:138] copyHostCerts
	I0817 00:16:24.383310   57864 exec_runner.go:145] found C:\Users\jenkins\minikube-integration\.minikube/cert.pem, removing ...
	I0817 00:16:24.383310   57864 exec_runner.go:190] rm: C:\Users\jenkins\minikube-integration\.minikube\cert.pem
	I0817 00:16:24.383756   57864 exec_runner.go:152] cp: C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0817 00:16:24.385633   57864 exec_runner.go:145] found C:\Users\jenkins\minikube-integration\.minikube/key.pem, removing ...
	I0817 00:16:24.385633   57864 exec_runner.go:190] rm: C:\Users\jenkins\minikube-integration\.minikube\key.pem
	I0817 00:16:24.386208   57864 exec_runner.go:152] cp: C:\Users\jenkins\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins\minikube-integration\.minikube/key.pem (1679 bytes)
	I0817 00:16:24.387678   57864 exec_runner.go:145] found C:\Users\jenkins\minikube-integration\.minikube/ca.pem, removing ...
	I0817 00:16:24.387678   57864 exec_runner.go:190] rm: C:\Users\jenkins\minikube-integration\.minikube\ca.pem
	I0817 00:16:24.388129   57864 exec_runner.go:152] cp: C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0817 00:16:24.389230   57864 provision.go:112] generating server cert: C:\Users\jenkins\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.pause-20210817001556-111344 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube pause-20210817001556-111344]
	I0817 00:16:24.869474   57864 provision.go:172] copyRemoteCerts
	I0817 00:16:24.877470   57864 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 00:16:24.885010   57864 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210817001556-111344
	I0817 00:16:25.383712   57864 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55136 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\pause-20210817001556-111344\id_rsa Username:docker}
	I0817 00:16:25.601349   57864 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0817 00:16:25.706881   57864 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1257 bytes)
	I0817 00:16:25.793482   57864 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0817 00:16:25.875518   57864 provision.go:86] duration metric: configureAuth took 1.9999332s
	I0817 00:16:25.875518   57864 ubuntu.go:193] setting minikube options for container-runtime
	I0817 00:16:25.876371   57864 config.go:177] Loaded profile config "pause-20210817001556-111344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.21.3
	I0817 00:16:25.884130   57864 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210817001556-111344
	I0817 00:16:25.459379   94724 out.go:177] * Using the docker driver based on existing profile
	I0817 00:16:25.459896   94724 start.go:278] selected driver: docker
	I0817 00:16:25.460576   94724 start.go:751] validating driver "docker" against &{Name:stopped-upgrade-20210817001119-111344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-20210817001119-111344 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 00:16:25.461569   94724 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0817 00:16:25.547913   94724 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 00:16:26.406131   94724 info.go:263] docker info: {ID:4XCY:3GZD:KK67:IPM7:RRQF:WWZF:OGQ6:X6HQ:572M:7N57:P63G:EAE5 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:true NGoroutines:57 SystemTime:2021-08-17 00:16:26.030034 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0817 00:16:26.406554   94724 cni.go:93] Creating CNI manager for ""
	I0817 00:16:26.406554   94724 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0817 00:16:26.406554   94724 start_flags.go:277] config:
	{Name:stopped-upgrade-20210817001119-111344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-20210817001119-111344 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 00:16:26.409965   94724 out.go:177] * Starting control plane node stopped-upgrade-20210817001119-111344 in cluster stopped-upgrade-20210817001119-111344
	I0817 00:16:26.409965   94724 cache.go:117] Beginning downloading kic base image for docker with docker
	I0817 00:16:26.412175   94724 out.go:177] * Pulling base image ...
	I0817 00:16:26.412390   94724 preload.go:131] Checking if preload exists for k8s version v1.18.0 and runtime docker
	I0817 00:16:26.412590   94724 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	W0817 00:16:26.481816   94724 preload.go:114] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.18.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I0817 00:16:26.482088   94724 profile.go:148] Saving config to C:\Users\jenkins\minikube-integration\.minikube\profiles\stopped-upgrade-20210817001119-111344\config.json ...
	I0817 00:16:26.482686   94724 localpath.go:146] windows sanitize: C:\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-controller-manager:v1.18.0 -> C:\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-controller-manager_v1.18.0
	I0817 00:16:26.482686   94724 localpath.go:146] windows sanitize: C:\Users\jenkins\minikube-integration\.minikube\cache\images\docker.io\kubernetesui\dashboard:v2.1.0 -> C:\Users\jenkins\minikube-integration\.minikube\cache\images\docker.io\kubernetesui\dashboard_v2.1.0
	I0817 00:16:26.482686   94724 localpath.go:146] windows sanitize: C:\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\etcd:3.4.3-0 -> C:\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\etcd_3.4.3-0
	I0817 00:16:26.482999   94724 localpath.go:146] windows sanitize: C:\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-proxy:v1.18.0 -> C:\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-proxy_v1.18.0
	I0817 00:16:26.482686   94724 localpath.go:146] windows sanitize: C:\Users\jenkins\minikube-integration\.minikube\cache\images\docker.io\kubernetesui\metrics-scraper:v1.0.4 -> C:\Users\jenkins\minikube-integration\.minikube\cache\images\docker.io\kubernetesui\metrics-scraper_v1.0.4
	I0817 00:16:26.483217   94724 localpath.go:146] windows sanitize: C:\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-apiserver:v1.18.0 -> C:\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-apiserver_v1.18.0
	I0817 00:16:26.482686   94724 localpath.go:146] windows sanitize: C:\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-scheduler:v1.18.0 -> C:\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-scheduler_v1.18.0
	I0817 00:16:26.482686   94724 localpath.go:146] windows sanitize: C:\Users\jenkins\minikube-integration\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins\minikube-integration\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner_v5
	I0817 00:16:26.482686   94724 localpath.go:146] windows sanitize: C:\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\coredns:1.6.7 -> C:\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\coredns_1.6.7
	I0817 00:16:26.483217   94724 localpath.go:146] windows sanitize: C:\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\pause:3.2 -> C:\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\pause_3.2
	I0817 00:16:26.737865   94724 cache.go:108] acquiring lock: {Name:mkd2ccfcccd54ebe4281a29fc50019eba2cc1f05 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 00:16:26.737865   94724 cache.go:108] acquiring lock: {Name:mke7605b7de248e6e75bc91c356ecb356c73b5df Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 00:16:26.739391   94724 image.go:133] retrieving image: k8s.gcr.io/kube-controller-manager:v1.18.0
	I0817 00:16:26.739391   94724 image.go:133] retrieving image: k8s.gcr.io/kube-proxy:v1.18.0
	I0817 00:16:26.751717   94724 cache.go:108] acquiring lock: {Name:mkfe443c64d1a3dae7531e1da24945fa4d1b684d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 00:16:26.753345   94724 cache.go:116] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\minikube-integration\.minikube\cache\images\docker.io\kubernetesui\dashboard_v2.1.0 exists
	I0817 00:16:26.753755   94724 cache.go:97] cache image "docker.io/kubernetesui/dashboard:v2.1.0" -> "C:\\Users\\jenkins\\minikube-integration\\.minikube\\cache\\images\\docker.io\\kubernetesui\\dashboard_v2.1.0" took 271.0585ms
	I0817 00:16:26.753755   94724 cache.go:81] save to tar file docker.io/kubernetesui/dashboard:v2.1.0 -> C:\Users\jenkins\minikube-integration\.minikube\cache\images\docker.io\kubernetesui\dashboard_v2.1.0 succeeded
	I0817 00:16:26.760237   94724 cache.go:108] acquiring lock: {Name:mkbd69c89f5d4341beed10f900f1632dd59716b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 00:16:26.760867   94724 cache.go:116] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\minikube-integration\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I0817 00:16:26.760867   94724 cache.go:97] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins\\minikube-integration\\.minikube\\cache\\images\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 274.7554ms
	I0817 00:16:26.761046   94724 cache.go:81] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins\minikube-integration\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I0817 00:16:26.766102   94724 cache.go:108] acquiring lock: {Name:mkb74bd87eb22929daebbdeb6bdaf2f5ca50ff9b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 00:16:26.766761   94724 image.go:133] retrieving image: k8s.gcr.io/kube-scheduler:v1.18.0
	I0817 00:16:26.766272   94724 cache.go:108] acquiring lock: {Name:mkcbba06c099fa67c03e9375ab41c3707a41a063 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 00:16:26.767466   94724 cache.go:116] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\minikube-integration\.minikube\cache\images\docker.io\kubernetesui\metrics-scraper_v1.0.4 exists
	I0817 00:16:26.767466   94724 cache.go:97] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.4" -> "C:\\Users\\jenkins\\minikube-integration\\.minikube\\cache\\images\\docker.io\\kubernetesui\\metrics-scraper_v1.0.4" took 284.2384ms
	I0817 00:16:26.767466   94724 cache.go:81] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.4 -> C:\Users\jenkins\minikube-integration\.minikube\cache\images\docker.io\kubernetesui\metrics-scraper_v1.0.4 succeeded
	I0817 00:16:26.769130   94724 cache.go:108] acquiring lock: {Name:mka54931d59bc7476465d7645bc53541641c717b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 00:16:26.769130   94724 image.go:133] retrieving image: k8s.gcr.io/kube-apiserver:v1.18.0
	I0817 00:16:26.771123   94724 cache.go:108] acquiring lock: {Name:mk41784f741770fcb4d267ff21f421a142a0b8bc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 00:16:26.771123   94724 cache.go:108] acquiring lock: {Name:mka041b07b978ce108914addfeadad35aeffa0e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 00:16:26.771123   94724 cache.go:108] acquiring lock: {Name:mkb2e182aa78eb71eda4e329b13a042e8a646d8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 00:16:26.771123   94724 cache.go:116] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\etcd_3.4.3-0 exists
	I0817 00:16:26.771123   94724 image.go:133] retrieving image: k8s.gcr.io/coredns:1.6.7
	I0817 00:16:26.771123   94724 image.go:133] retrieving image: k8s.gcr.io/pause:3.2
	I0817 00:16:26.771123   94724 cache.go:97] cache image "k8s.gcr.io/etcd:3.4.3-0" -> "C:\\Users\\jenkins\\minikube-integration\\.minikube\\cache\\images\\k8s.gcr.io\\etcd_3.4.3-0" took 288.2638ms
	I0817 00:16:26.771123   94724 cache.go:81] save to tar file k8s.gcr.io/etcd:3.4.3-0 -> C:\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\etcd_3.4.3-0 succeeded
	I0817 00:16:26.781770   94724 image.go:175] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.18.0: Error response from daemon: reference does not exist
	I0817 00:16:26.805154   94724 image.go:175] daemon lookup for k8s.gcr.io/kube-proxy:v1.18.0: Error response from daemon: reference does not exist
	I0817 00:16:26.812153   94724 image.go:175] daemon lookup for k8s.gcr.io/kube-apiserver:v1.18.0: Error response from daemon: reference does not exist
	I0817 00:16:26.825152   94724 image.go:175] daemon lookup for k8s.gcr.io/coredns:1.6.7: Error response from daemon: reference does not exist
	I0817 00:16:26.839097   94724 image.go:175] daemon lookup for k8s.gcr.io/pause:3.2: Error response from daemon: reference does not exist
	I0817 00:16:26.857852   94724 image.go:175] daemon lookup for k8s.gcr.io/kube-scheduler:v1.18.0: Error response from daemon: reference does not exist
	W0817 00:16:26.917076   94724 image.go:185] authn lookup for k8s.gcr.io/kube-controller-manager:v1.18.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0817 00:16:27.025635   94724 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	W0817 00:16:27.025635   94724 image.go:185] authn lookup for k8s.gcr.io/kube-proxy:v1.18.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0817 00:16:27.025842   94724 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0817 00:16:27.025842   94724 cache.go:205] Successfully downloaded all kic artifacts
	I0817 00:16:27.026687   94724 start.go:313] acquiring machines lock for stopped-upgrade-20210817001119-111344: {Name:mkc0ce63cc4ee4b9a1ad8cd27e4db2cb0b752a07 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 00:16:27.027094   94724 start.go:317] acquired machines lock for "stopped-upgrade-20210817001119-111344" in 259.8µs
	I0817 00:16:27.027314   94724 start.go:93] Skipping create...Using existing machine configuration
	I0817 00:16:27.027443   94724 fix.go:55] fixHost starting: m01
	I0817 00:16:27.043161   94724 cli_runner.go:115] Run: docker container inspect stopped-upgrade-20210817001119-111344 --format={{.State.Status}}
	W0817 00:16:27.149154   94724 image.go:185] authn lookup for k8s.gcr.io/kube-apiserver:v1.18.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0817 00:16:27.252954   94724 image.go:185] authn lookup for k8s.gcr.io/coredns:1.6.7 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0817 00:16:27.353229   94724 image.go:185] authn lookup for k8s.gcr.io/pause:3.2 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0817 00:16:27.483231   94724 image.go:185] authn lookup for k8s.gcr.io/kube-scheduler:v1.18.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0817 00:16:27.551584   94724 fix.go:108] recreateIfNeeded on stopped-upgrade-20210817001119-111344: state=Stopped err=<nil>
	W0817 00:16:27.551584   94724 fix.go:134] unexpected machine state, will restart: <nil>
	W0817 00:16:23.510079   76328 cli_runner.go:162] docker network inspect docker-flags-20210817001618-111344 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0817 00:16:23.518041   76328 network_create.go:255] running [docker network inspect docker-flags-20210817001618-111344] to gather additional debugging logs...
	I0817 00:16:23.518408   76328 cli_runner.go:115] Run: docker network inspect docker-flags-20210817001618-111344
	W0817 00:16:24.043237   76328 cli_runner.go:162] docker network inspect docker-flags-20210817001618-111344 returned with exit code 1
	I0817 00:16:24.043600   76328 network_create.go:258] error running [docker network inspect docker-flags-20210817001618-111344]: docker network inspect docker-flags-20210817001618-111344: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: docker-flags-20210817001618-111344
	I0817 00:16:24.043600   76328 network_create.go:260] output of [docker network inspect docker-flags-20210817001618-111344]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: docker-flags-20210817001618-111344
	
	** /stderr **
	I0817 00:16:24.047696   76328 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0817 00:16:24.570356   76328 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0006c61c0] misses:0}
	I0817 00:16:24.570606   76328 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0817 00:16:24.570735   76328 network_create.go:106] attempt to create docker network docker-flags-20210817001618-111344 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0817 00:16:24.581492   76328 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20210817001618-111344
	W0817 00:16:25.093529   76328 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20210817001618-111344 returned with exit code 1
	W0817 00:16:25.094203   76328 network_create.go:98] failed to create docker network docker-flags-20210817001618-111344 192.168.49.0/24, will retry: subnet is taken
	I0817 00:16:25.106114   76328 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0006c61c0] amended:false}} dirty:map[] misses:0}
	I0817 00:16:25.106727   76328 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0817 00:16:25.121799   76328 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0006c61c0] amended:true}} dirty:map[192.168.49.0:0xc0006c61c0 192.168.58.0:0xc00058a450] misses:0}
	I0817 00:16:25.121799   76328 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0817 00:16:25.121799   76328 network_create.go:106] attempt to create docker network docker-flags-20210817001618-111344 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0817 00:16:25.129600   76328 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20210817001618-111344
	I0817 00:16:25.912550   76328 network_create.go:90] docker network docker-flags-20210817001618-111344 192.168.58.0/24 created
	I0817 00:16:25.913063   76328 kic.go:106] calculated static IP "192.168.58.2" for the "docker-flags-20210817001618-111344" container
	I0817 00:16:25.929103   76328 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0817 00:16:26.446036   76328 cli_runner.go:115] Run: docker volume create docker-flags-20210817001618-111344 --label name.minikube.sigs.k8s.io=docker-flags-20210817001618-111344 --label created_by.minikube.sigs.k8s.io=true
	I0817 00:16:27.094373   76328 oci.go:102] Successfully created a docker volume docker-flags-20210817001618-111344
	I0817 00:16:27.105697   76328 cli_runner.go:115] Run: docker run --rm --name docker-flags-20210817001618-111344-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-20210817001618-111344 --entrypoint /usr/bin/test -v docker-flags-20210817001618-111344:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib
	I0817 00:16:26.381234   57864 main.go:130] libmachine: Using SSH client type: native
	I0817 00:16:26.381634   57864 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x4c95a0] 0x4c9560 <nil>  [] 0s} 127.0.0.1 55136 <nil> <nil>}
	I0817 00:16:26.381634   57864 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0817 00:16:26.812153   57864 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0817 00:16:26.812153   57864 ubuntu.go:71] root file system type: overlay
	I0817 00:16:26.812153   57864 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0817 00:16:26.819153   57864 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210817001556-111344
	I0817 00:16:27.364668   57864 main.go:130] libmachine: Using SSH client type: native
	I0817 00:16:27.365359   57864 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x4c95a0] 0x4c9560 <nil>  [] 0s} 127.0.0.1 55136 <nil> <nil>}
	I0817 00:16:27.365359   57864 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0817 00:16:27.722697   57864 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0817 00:16:27.729362   57864 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210817001556-111344
	I0817 00:16:28.356886   57864 main.go:130] libmachine: Using SSH client type: native
	I0817 00:16:28.357895   57864 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x4c95a0] 0x4c9560 <nil>  [] 0s} 127.0.0.1 55136 <nil> <nil>}
	I0817 00:16:28.357895   57864 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0817 00:16:27.553706   94724 out.go:177] * Restarting existing docker container for "stopped-upgrade-20210817001119-111344" ...
	I0817 00:16:27.560406   94724 cli_runner.go:115] Run: docker start stopped-upgrade-20210817001119-111344
	I0817 00:16:27.602882   94724 cache.go:162] opening:  \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-controller-manager_v1.18.0
	I0817 00:16:27.711354   94724 cache.go:162] opening:  \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-proxy_v1.18.0
	I0817 00:16:27.861576   94724 cache.go:162] opening:  \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-apiserver_v1.18.0
	I0817 00:16:27.955634   94724 cache.go:162] opening:  \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\coredns_1.6.7
	I0817 00:16:28.154859   94724 cache.go:162] opening:  \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-scheduler_v1.18.0
	I0817 00:16:28.172464   94724 cache.go:162] opening:  \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\pause_3.2
	I0817 00:16:28.330877   94724 cache.go:157] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\pause_3.2 exists
	I0817 00:16:28.330877   94724 cache.go:97] cache image "k8s.gcr.io/pause:3.2" -> "C:\\Users\\jenkins\\minikube-integration\\.minikube\\cache\\images\\k8s.gcr.io\\pause_3.2" took 1.8442472s
	I0817 00:16:28.331866   94724 cache.go:81] save to tar file k8s.gcr.io/pause:3.2 -> C:\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\pause_3.2 succeeded
	I0817 00:16:28.725716   94724 cache.go:157] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\coredns_1.6.7 exists
	I0817 00:16:28.726078   94724 cache.go:97] cache image "k8s.gcr.io/coredns:1.6.7" -> "C:\\Users\\jenkins\\minikube-integration\\.minikube\\cache\\images\\k8s.gcr.io\\coredns_1.6.7" took 2.2390712s
	I0817 00:16:28.726078   94724 cache.go:81] save to tar file k8s.gcr.io/coredns:1.6.7 -> C:\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\coredns_1.6.7 succeeded
	I0817 00:16:29.139486   94724 cache.go:157] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-controller-manager_v1.18.0 exists
	I0817 00:16:29.139486   94724 cache.go:97] cache image "k8s.gcr.io/kube-controller-manager:v1.18.0" -> "C:\\Users\\jenkins\\minikube-integration\\.minikube\\cache\\images\\k8s.gcr.io\\kube-controller-manager_v1.18.0" took 2.6566993s
	I0817 00:16:29.140324   94724 cache.go:81] save to tar file k8s.gcr.io/kube-controller-manager:v1.18.0 -> C:\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-controller-manager_v1.18.0 succeeded
	I0817 00:16:29.255533   94724 cache.go:157] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-apiserver_v1.18.0 exists
	I0817 00:16:29.255533   94724 cache.go:97] cache image "k8s.gcr.io/kube-apiserver:v1.18.0" -> "C:\\Users\\jenkins\\minikube-integration\\.minikube\\cache\\images\\k8s.gcr.io\\kube-apiserver_v1.18.0" took 2.7722107s
	I0817 00:16:29.255533   94724 cache.go:81] save to tar file k8s.gcr.io/kube-apiserver:v1.18.0 -> C:\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-apiserver_v1.18.0 succeeded
	I0817 00:16:29.429191   94724 cache.go:157] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-scheduler_v1.18.0 exists
	I0817 00:16:29.429992   94724 cache.go:97] cache image "k8s.gcr.io/kube-scheduler:v1.18.0" -> "C:\\Users\\jenkins\\minikube-integration\\.minikube\\cache\\images\\k8s.gcr.io\\kube-scheduler_v1.18.0" took 2.9445152s
	I0817 00:16:29.430206   94724 cache.go:81] save to tar file k8s.gcr.io/kube-scheduler:v1.18.0 -> C:\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-scheduler_v1.18.0 succeeded
	I0817 00:16:29.886461   94724 cache.go:157] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-proxy_v1.18.0 exists
	I0817 00:16:29.887237   94724 cache.go:97] cache image "k8s.gcr.io/kube-proxy:v1.18.0" -> "C:\\Users\\jenkins\\minikube-integration\\.minikube\\cache\\images\\k8s.gcr.io\\kube-proxy_v1.18.0" took 3.4040299s
	I0817 00:16:29.887237   94724 cache.go:81] save to tar file k8s.gcr.io/kube-proxy:v1.18.0 -> C:\Users\jenkins\minikube-integration\.minikube\cache\images\k8s.gcr.io\kube-proxy_v1.18.0 succeeded
	I0817 00:16:29.887463   94724 cache.go:88] Successfully saved all images to host disk.
	I0817 00:16:30.057002   94724 cli_runner.go:168] Completed: docker start stopped-upgrade-20210817001119-111344: (2.4965008s)
	I0817 00:16:30.064491   94724 cli_runner.go:115] Run: docker container inspect stopped-upgrade-20210817001119-111344 --format={{.State.Status}}
	I0817 00:16:30.617051   94724 kic.go:420] container "stopped-upgrade-20210817001119-111344" state is running.
	I0817 00:16:30.629234   94724 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-20210817001119-111344
	I0817 00:16:31.172972   94724 profile.go:148] Saving config to C:\Users\jenkins\minikube-integration\.minikube\profiles\stopped-upgrade-20210817001119-111344\config.json ...
	I0817 00:16:31.176728   94724 machine.go:88] provisioning docker machine ...
	I0817 00:16:31.176922   94724 ubuntu.go:169] provisioning hostname "stopped-upgrade-20210817001119-111344"
	I0817 00:16:31.184906   94724 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-20210817001119-111344
	I0817 00:16:31.716119   94724 main.go:130] libmachine: Using SSH client type: native
	I0817 00:16:31.716777   94724 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x4c95a0] 0x4c9560 <nil>  [] 0s} 127.0.0.1 55139 <nil> <nil>}
	I0817 00:16:31.716777   94724 main.go:130] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-20210817001119-111344 && echo "stopped-upgrade-20210817001119-111344" | sudo tee /etc/hostname
	I0817 00:16:32.028531   94724 main.go:130] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-20210817001119-111344
	
	I0817 00:16:32.037554   94724 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-20210817001119-111344
	I0817 00:16:30.586118   76328 cli_runner.go:168] Completed: docker run --rm --name docker-flags-20210817001618-111344-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-20210817001618-111344 --entrypoint /usr/bin/test -v docker-flags-20210817001618-111344:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib: (3.4800161s)
	I0817 00:16:30.586118   76328 oci.go:106] Successfully prepared a docker volume docker-flags-20210817001618-111344
	I0817 00:16:30.586418   76328 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
	I0817 00:16:30.586523   76328 kic.go:179] Starting extracting preloaded images to volume ...
	I0817 00:16:30.594756   76328 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-20210817001618-111344:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir
	I0817 00:16:30.603862   76328 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	W0817 00:16:31.198900   76328 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-20210817001618-111344:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I0817 00:16:31.198900   76328 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-20210817001618-111344:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: System.Exception: The notification platform is unavailable.
	
	   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)
	   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__6.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.WPF\PromptShareDirectory.cs:line 53
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__8.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.ApiServices\Mounting\FileSharing.cs:line 95
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__6.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.ApiServices\Mounting\FileSharing.cs:line 55
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.HttpApi\Controllers\FilesharingController.cs:line 21
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()
	[garbled binary .NET exception-serialization payload elided; recoverable fields: throwing method Windows.UI.Notifications.ToastNotifier CreateToastNotifier(System.String) in Windows.UI (ContentType=WindowsRuntime); RestrictedDescription: The notification platform is unavailable.]
	See 'docker run --help'.
	I0817 00:16:31.520988   76328 info.go:263] docker info: {ID:4XCY:3GZD:KK67:IPM7:RRQF:WWZF:OGQ6:X6HQ:572M:7N57:P63G:EAE5 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:70 OomKillDisable:true NGoroutines:64 SystemTime:2021-08-17 00:16:31.0969737 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0817 00:16:31.527622   76328 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0817 00:16:32.391514   76328 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname docker-flags-20210817001618-111344 --name docker-flags-20210817001618-111344 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-20210817001618-111344 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=docker-flags-20210817001618-111344 --network docker-flags-20210817001618-111344 --ip 192.168.58.2 --volume docker-flags-20210817001618-111344:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6
	I0817 00:16:32.191514   57864 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-07-30 19:52:33.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2021-08-17 00:16:27.715189000 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	+BindsTo=containerd.service
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0817 00:16:32.191745   57864 machine.go:91] provisioned docker machine in 9.948962s
	I0817 00:16:32.191745   57864 client.go:171] LocalClient.Create took 31.3708712s
	I0817 00:16:32.191745   57864 start.go:168] duration metric: libmachine.API.Create for "pause-20210817001556-111344" took 31.3708712s
	I0817 00:16:32.191745   57864 start.go:267] post-start starting for "pause-20210817001556-111344" (driver="docker")
	I0817 00:16:32.191745   57864 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 00:16:32.201390   57864 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 00:16:32.206919   57864 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210817001556-111344
	I0817 00:16:32.688577   57864 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55136 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\pause-20210817001556-111344\id_rsa Username:docker}
	I0817 00:16:32.888383   57864 ssh_runner.go:149] Run: cat /etc/os-release
	I0817 00:16:32.906204   57864 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0817 00:16:32.906204   57864 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0817 00:16:32.906395   57864 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0817 00:16:32.906395   57864 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0817 00:16:32.906475   57864 filesync.go:126] Scanning C:\Users\jenkins\minikube-integration\.minikube\addons for local assets ...
	I0817 00:16:32.906475   57864 filesync.go:126] Scanning C:\Users\jenkins\minikube-integration\.minikube\files for local assets ...
	I0817 00:16:32.907187   57864 filesync.go:149] local asset: C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\1113442.pem -> 1113442.pem in /etc/ssl/certs
	I0817 00:16:32.922249   57864 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0817 00:16:32.956158   57864 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\1113442.pem --> /etc/ssl/certs/1113442.pem (1708 bytes)
	I0817 00:16:33.040723   57864 start.go:270] post-start completed in 848.9459ms
	I0817 00:16:33.052115   57864 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-20210817001556-111344
	I0817 00:16:33.547133   57864 profile.go:148] Saving config to C:\Users\jenkins\minikube-integration\.minikube\profiles\pause-20210817001556-111344\config.json ...
	I0817 00:16:33.561271   57864 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0817 00:16:33.566872   57864 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210817001556-111344
	I0817 00:16:34.075802   57864 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55136 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\pause-20210817001556-111344\id_rsa Username:docker}
	I0817 00:16:34.242785   57864 start.go:129] duration metric: createHost completed in 33.4241633s
	I0817 00:16:34.242785   57864 start.go:80] releasing machines lock for "pause-20210817001556-111344", held for 33.4246614s
	I0817 00:16:34.250702   57864 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-20210817001556-111344
	I0817 00:16:34.760031   57864 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0817 00:16:34.766990   57864 ssh_runner.go:149] Run: systemctl --version
	I0817 00:16:34.768769   57864 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210817001556-111344
	I0817 00:16:34.775934   57864 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210817001556-111344
	I0817 00:16:35.313714   57864 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55136 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\pause-20210817001556-111344\id_rsa Username:docker}
	I0817 00:16:35.319969   57864 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55136 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\pause-20210817001556-111344\id_rsa Username:docker}
	I0817 00:16:35.668306   57864 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0817 00:16:35.746177   57864 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0817 00:16:35.801463   57864 cruntime.go:249] skipping containerd shutdown because we are bound to it
	I0817 00:16:35.808571   57864 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0817 00:16:35.877500   57864 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 00:16:35.986052   57864 ssh_runner.go:149] Run: sudo systemctl unmask docker.service
	I0817 00:16:32.503174   94724 main.go:130] libmachine: Using SSH client type: native
	I0817 00:16:32.503404   94724 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x4c95a0] 0x4c9560 <nil>  [] 0s} 127.0.0.1 55139 <nil> <nil>}
	I0817 00:16:32.503660   94724 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-20210817001119-111344' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-20210817001119-111344/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-20210817001119-111344' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 00:16:32.746834   94724 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0817 00:16:32.746834   94724 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins\minikube-integration\.minikube CaCertPath:C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins\minikube-integration\.minikube}
	I0817 00:16:32.747162   94724 ubuntu.go:177] setting up certificates
	I0817 00:16:32.747162   94724 provision.go:83] configureAuth start
	I0817 00:16:32.759239   94724 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-20210817001119-111344
	I0817 00:16:33.315921   94724 provision.go:138] copyHostCerts
	I0817 00:16:33.316460   94724 exec_runner.go:145] found C:\Users\jenkins\minikube-integration\.minikube/ca.pem, removing ...
	I0817 00:16:33.316567   94724 exec_runner.go:190] rm: C:\Users\jenkins\minikube-integration\.minikube\ca.pem
	I0817 00:16:33.316993   94724 exec_runner.go:152] cp: C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0817 00:16:33.318718   94724 exec_runner.go:145] found C:\Users\jenkins\minikube-integration\.minikube/cert.pem, removing ...
	I0817 00:16:33.318852   94724 exec_runner.go:190] rm: C:\Users\jenkins\minikube-integration\.minikube\cert.pem
	I0817 00:16:33.319288   94724 exec_runner.go:152] cp: C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0817 00:16:33.321310   94724 exec_runner.go:145] found C:\Users\jenkins\minikube-integration\.minikube/key.pem, removing ...
	I0817 00:16:33.321498   94724 exec_runner.go:190] rm: C:\Users\jenkins\minikube-integration\.minikube\key.pem
	I0817 00:16:33.322006   94724 exec_runner.go:152] cp: C:\Users\jenkins\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins\minikube-integration\.minikube/key.pem (1679 bytes)
	I0817 00:16:33.323482   94724 provision.go:112] generating server cert: C:\Users\jenkins\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.stopped-upgrade-20210817001119-111344 san=[172.17.0.2 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-20210817001119-111344]
	I0817 00:16:33.541296   94724 provision.go:172] copyRemoteCerts
	I0817 00:16:33.550545   94724 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 00:16:33.563385   94724 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-20210817001119-111344
	I0817 00:16:34.075922   94724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55139 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\stopped-upgrade-20210817001119-111344\id_rsa Username:docker}
	I0817 00:16:34.258115   94724 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0817 00:16:34.314124   94724 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0817 00:16:34.374644   94724 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1281 bytes)
	I0817 00:16:34.450241   94724 provision.go:86] duration metric: configureAuth took 1.7030145s
	I0817 00:16:34.450383   94724 ubuntu.go:193] setting minikube options for container-runtime
	I0817 00:16:34.450888   94724 config.go:177] Loaded profile config "stopped-upgrade-20210817001119-111344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0817 00:16:34.453051   94724 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-20210817001119-111344
	I0817 00:16:34.955495   94724 main.go:130] libmachine: Using SSH client type: native
	I0817 00:16:34.955495   94724 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x4c95a0] 0x4c9560 <nil>  [] 0s} 127.0.0.1 55139 <nil> <nil>}
	I0817 00:16:34.955495   94724 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0817 00:16:35.248288   94724 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0817 00:16:35.248446   94724 ubuntu.go:71] root file system type: overlay
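The provisioner branches on the guest's root filesystem type (overlay here, since the node is itself a container), which it learns from the df command just run over SSH. A local sketch of the same probe, assuming a sh-compatible shell is available:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same probe as the log: print only the fstype column for /, keep the last line.
		out, err := exec.Command("sh", "-c", `df --output=fstype / | tail -n 1`).Output()
		if err != nil {
			panic(err)
		}
		fmt.Println("root file system type:", strings.TrimSpace(string(out)))
	}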
	I0817 00:16:35.248930   94724 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0817 00:16:35.262382   94724 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-20210817001119-111344
	I0817 00:16:35.788855   94724 main.go:130] libmachine: Using SSH client type: native
	I0817 00:16:35.789188   94724 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x4c95a0] 0x4c9560 <nil>  [] 0s} 127.0.0.1 55139 <nil> <nil>}
	I0817 00:16:35.789351   94724 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0817 00:16:36.196714   94724 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0817 00:16:36.206269   94724 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-20210817001119-111344
	I0817 00:16:36.701710   94724 main.go:130] libmachine: Using SSH client type: native
	I0817 00:16:36.701710   94724 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x4c95a0] 0x4c9560 <nil>  [] 0s} 127.0.0.1 55139 <nil> <nil>}
	I0817 00:16:36.701710   94724 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
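The one-liner above makes the unit update idempotent: diff -u exits 0 when the staged docker.service.new matches what is already installed, so the || branch (move into place, daemon-reload, enable, restart) only runs when the content actually changed. The same compare-then-swap logic as a sketch, with the paths from the log and a hypothetical helper name:

	package main

	import (
		"bytes"
		"os"
		"os/exec"
	)

	// installIfChanged swaps newPath into place and restarts the unit only
	// when its content differs from the currently installed file.
	func installIfChanged(curPath, newPath, unit string) error {
		cur, _ := os.ReadFile(curPath) // a missing current file counts as "changed"
		next, err := os.ReadFile(newPath)
		if err != nil {
			return err
		}
		if bytes.Equal(cur, next) {
			return nil // identical content: diff would exit 0, nothing to restart
		}
		if err := os.Rename(newPath, curPath); err != nil {
			return err
		}
		for _, args := range [][]string{{"daemon-reload"}, {"enable", unit}, {"restart", unit}} {
			if err := exec.Command("systemctl", append([]string{"-f"}, args...)...).Run(); err != nil {
				return err
			}
		}
		return nil
	}

	func main() {
		_ = installIfChanged("/lib/systemd/system/docker.service",
			"/lib/systemd/system/docker.service.new", "docker")
	}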
	I0817 00:16:36.494688   57864 ssh_runner.go:149] Run: sudo systemctl enable docker.socket
	I0817 00:16:36.894716   57864 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0817 00:16:36.953915   57864 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0817 00:16:37.376929   57864 ssh_runner.go:149] Run: sudo systemctl start docker
	I0817 00:16:37.417914   57864 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0817 00:16:37.821361   57864 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0817 00:16:34.961254   76328 cli_runner.go:168] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname docker-flags-20210817001618-111344 --name docker-flags-20210817001618-111344 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-20210817001618-111344 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=docker-flags-20210817001618-111344 --network docker-flags-20210817001618-111344 --ip 192.168.58.2 --volume docker-flags-20210817001618-111344:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6: (2.5687649s)
	I0817 00:16:34.968490   76328 cli_runner.go:115] Run: docker container inspect docker-flags-20210817001618-111344 --format={{.State.Running}}
	I0817 00:16:35.499339   76328 cli_runner.go:115] Run: docker container inspect docker-flags-20210817001618-111344 --format={{.State.Status}}
	I0817 00:16:36.035828   76328 cli_runner.go:115] Run: docker exec docker-flags-20210817001618-111344 stat /var/lib/dpkg/alternatives/iptables
	I0817 00:16:36.871001   76328 oci.go:278] the created container "docker-flags-20210817001618-111344" has a running status.
	I0817 00:16:36.871001   76328 kic.go:210] Creating ssh key for kic: C:\Users\jenkins\minikube-integration\.minikube\machines\docker-flags-20210817001618-111344\id_rsa...
	I0817 00:16:37.557149   76328 vm_assets.go:99] NewFileAsset: C:\Users\jenkins\minikube-integration\.minikube\machines\docker-flags-20210817001618-111344\id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0817 00:16:37.563204   76328 kic_runner.go:188] docker (temp): C:\Users\jenkins\minikube-integration\.minikube\machines\docker-flags-20210817001618-111344\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0817 00:16:38.033537   57864 out.go:204] * Preparing Kubernetes v1.21.3 on Docker 20.10.8 ...
	I0817 00:16:38.045555   57864 cli_runner.go:115] Run: docker exec -t pause-20210817001556-111344 dig +short host.docker.internal
	I0817 00:16:38.837823   57864 network.go:69] got host ip for mount in container by digging dns: 192.168.65.2
	I0817 00:16:38.845819   57864 ssh_runner.go:149] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0817 00:16:38.865391   57864 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 00:16:38.945434   57864 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-20210817001556-111344
	I0817 00:16:39.442567   57864 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
	I0817 00:16:39.457349   57864 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0817 00:16:39.664668   57864 docker.go:535] Got preloaded images: 
	I0817 00:16:39.665669   57864 docker.go:541] k8s.gcr.io/kube-apiserver:v1.21.3 wasn't preloaded
	I0817 00:16:39.674623   57864 ssh_runner.go:149] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0817 00:16:39.736422   57864 ssh_runner.go:149] Run: which lz4
	I0817 00:16:39.758332   57864 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0817 00:16:39.777169   57864 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0817 00:16:39.777569   57864 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (504826016 bytes)
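The %!s(MISSING) fragments are a logging artifact rather than part of the remote command: what actually runs is stat -c "%s %y" /preloaded.tar.lz4, but the log line itself passes through a printf-style formatter, which consumes the stat verbs and reports their arguments as missing. The pattern is a cheap existence probe before a ~500 MB upload: a non-zero stat means the preload tarball is absent and must be copied up, as the following scp line does. A local sketch of that probe (the transfer step is left as a placeholder):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		const tarball = "/preloaded.tar.lz4"
		// Exits 0 and prints "<size> <mtime>" only when the file exists.
		out, err := exec.Command("stat", "-c", "%s %y", tarball).Output()
		if err != nil {
			fmt.Printf("existence check for %s failed: %v; would transfer the cached tarball now\n", tarball, err)
			return
		}
		fmt.Printf("already present: %s", out)
	}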
	I0817 00:16:41.043102   94724 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-08-17 00:13:11.140803000 +0000
	+++ /lib/systemd/system/docker.service.new	2021-08-17 00:16:36.175189000 +0000
	@@ -5,9 +5,12 @@
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	+Restart=on-failure
	 
	 
	 
	@@ -23,7 +26,7 @@
	 # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	 ExecStart=
	 ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	-ExecReload=/bin/kill -s HUP 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	
	I0817 00:16:41.043102   94724 machine.go:91] provisioned docker machine in 9.8658056s
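The diff printed above also explains why the heredoc earlier writes \$MAINPID: the unit body travels through the remote shell inside double quotes, so an unescaped $MAINPID would be expanded (to an empty string) before tee ever sees it. That is exactly the defect in the old on-disk unit, ExecReload=/bin/kill -s HUP with the variable swallowed, and the -/+ hunk shows the rewrite repairing it. A sketch of escaping the payload before handing it to the SSH session (variable and command shape are illustrative):

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		unit := "ExecReload=/bin/kill -s HUP $MAINPID\n"
		// Escape $ so the remote double-quoted context passes it through literally.
		payload := strings.ReplaceAll(unit, `$`, `\$`)
		cmd := fmt.Sprintf(`sudo mkdir -p /lib/systemd/system && printf %%s "%s" | sudo tee /lib/systemd/system/docker.service.new`, payload)
		fmt.Println(cmd) // this string is what gets run over SSH
	}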
	I0817 00:16:41.043102   94724 start.go:267] post-start starting for "stopped-upgrade-20210817001119-111344" (driver="docker")
	I0817 00:16:41.043102   94724 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 00:16:41.047157   94724 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 00:16:41.059855   94724 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-20210817001119-111344
	I0817 00:16:41.549236   94724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55139 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\stopped-upgrade-20210817001119-111344\id_rsa Username:docker}
	I0817 00:16:41.756230   94724 ssh_runner.go:149] Run: cat /etc/os-release
	I0817 00:16:41.781769   94724 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0817 00:16:41.781937   94724 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0817 00:16:41.781937   94724 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0817 00:16:41.781937   94724 info.go:137] Remote host: Ubuntu 19.10
	I0817 00:16:41.782078   94724 filesync.go:126] Scanning C:\Users\jenkins\minikube-integration\.minikube\addons for local assets ...
	I0817 00:16:41.782296   94724 filesync.go:126] Scanning C:\Users\jenkins\minikube-integration\.minikube\files for local assets ...
	I0817 00:16:41.783252   94724 filesync.go:149] local asset: C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\1113442.pem -> 1113442.pem in /etc/ssl/certs
	I0817 00:16:41.793817   94724 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0817 00:16:41.851374   94724 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\1113442.pem --> /etc/ssl/certs/1113442.pem (1708 bytes)
	I0817 00:16:41.942442   94724 start.go:270] post-start completed in 898.971ms
	I0817 00:16:41.954905   94724 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0817 00:16:41.963199   94724 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-20210817001119-111344
	I0817 00:16:42.448259   94724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55139 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\stopped-upgrade-20210817001119-111344\id_rsa Username:docker}
	I0817 00:16:38.553280   76328 cli_runner.go:115] Run: docker container inspect docker-flags-20210817001618-111344 --format={{.State.Status}}
	I0817 00:16:39.063363   76328 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0817 00:16:39.063363   76328 kic_runner.go:115] Args: [docker exec --privileged docker-flags-20210817001618-111344 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0817 00:16:39.744403   76328 kic.go:250] ensuring only current user has permissions to key file located at : C:\Users\jenkins\minikube-integration\.minikube\machines\docker-flags-20210817001618-111344\id_rsa...
	I0817 00:16:40.433369   76328 cli_runner.go:115] Run: docker container inspect docker-flags-20210817001618-111344 --format={{.State.Status}}
	I0817 00:16:40.929065   76328 machine.go:88] provisioning docker machine ...
	I0817 00:16:40.929210   76328 ubuntu.go:169] provisioning hostname "docker-flags-20210817001618-111344"
	I0817 00:16:40.936925   76328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210817001618-111344
	I0817 00:16:41.447585   76328 main.go:130] libmachine: Using SSH client type: native
	I0817 00:16:41.464508   76328 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x4c95a0] 0x4c9560 <nil>  [] 0s} 127.0.0.1 55144 <nil> <nil>}
	I0817 00:16:41.464508   76328 main.go:130] libmachine: About to run SSH command:
	sudo hostname docker-flags-20210817001618-111344 && echo "docker-flags-20210817001618-111344" | sudo tee /etc/hostname
	I0817 00:16:41.846741   76328 main.go:130] libmachine: SSH cmd err, output: <nil>: docker-flags-20210817001618-111344
	
	I0817 00:16:41.853083   76328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210817001618-111344
	I0817 00:16:42.346891   76328 main.go:130] libmachine: Using SSH client type: native
	I0817 00:16:42.347643   76328 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x4c95a0] 0x4c9560 <nil>  [] 0s} 127.0.0.1 55144 <nil> <nil>}
	I0817 00:16:42.347643   76328 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdocker-flags-20210817001618-111344' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 docker-flags-20210817001618-111344/g' /etc/hosts;
				else 
					echo '127.0.1.1 docker-flags-20210817001618-111344' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 00:16:42.643399   76328 main.go:130] libmachine: SSH cmd err, output: <nil>: 
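The shell fragment above keeps /etc/hosts consistent with the freshly set hostname: if no line already ends in the machine name, it rewrites an existing 127.0.1.1 entry in place, or appends one if none exists. The same ensure-entry logic as a sketch, with regexes mirroring the greps in the log:

	package main

	import (
		"fmt"
		"regexp"
		"strings"
	)

	// ensureHostsEntry returns hosts with a 127.0.1.1 line for name, rewriting
	// an existing 127.0.1.1 entry or appending a new one, like the script above.
	func ensureHostsEntry(hosts, name string) string {
		if regexp.MustCompile(`(?m)^.*\s`+regexp.QuoteMeta(name)+`$`).MatchString(hosts) {
			return hosts // an entry for this name is already present
		}
		loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if loopback.MatchString(hosts) {
			return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
		}
		return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
	}

	func main() {
		fmt.Print(ensureHostsEntry("127.0.0.1 localhost\n", "docker-flags-20210817001618-111344"))
	}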
	I0817 00:16:42.643399   76328 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins\minikube-integration\.minikube CaCertPath:C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins\minikube-integration\.minikube}
	I0817 00:16:42.643399   76328 ubuntu.go:177] setting up certificates
	I0817 00:16:42.643399   76328 provision.go:83] configureAuth start
	I0817 00:16:42.650999   76328 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" docker-flags-20210817001618-111344
	I0817 00:16:43.157328   76328 provision.go:138] copyHostCerts
	I0817 00:16:43.157328   76328 vm_assets.go:99] NewFileAsset: C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins\minikube-integration\.minikube/ca.pem
	I0817 00:16:43.157328   76328 exec_runner.go:145] found C:\Users\jenkins\minikube-integration\.minikube/ca.pem, removing ...
	I0817 00:16:43.157328   76328 exec_runner.go:190] rm: C:\Users\jenkins\minikube-integration\.minikube\ca.pem
	I0817 00:16:43.157328   76328 exec_runner.go:152] cp: C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0817 00:16:43.158160   76328 vm_assets.go:99] NewFileAsset: C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins\minikube-integration\.minikube/cert.pem
	I0817 00:16:43.159168   76328 exec_runner.go:145] found C:\Users\jenkins\minikube-integration\.minikube/cert.pem, removing ...
	I0817 00:16:43.159168   76328 exec_runner.go:190] rm: C:\Users\jenkins\minikube-integration\.minikube\cert.pem
	I0817 00:16:43.159168   76328 exec_runner.go:152] cp: C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0817 00:16:43.160163   76328 vm_assets.go:99] NewFileAsset: C:\Users\jenkins\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins\minikube-integration\.minikube/key.pem
	I0817 00:16:43.160163   76328 exec_runner.go:145] found C:\Users\jenkins\minikube-integration\.minikube/key.pem, removing ...
	I0817 00:16:43.160163   76328 exec_runner.go:190] rm: C:\Users\jenkins\minikube-integration\.minikube\key.pem
	I0817 00:16:43.160163   76328 exec_runner.go:152] cp: C:\Users\jenkins\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins\minikube-integration\.minikube/key.pem (1679 bytes)
	I0817 00:16:43.162170   76328 provision.go:112] generating server cert: C:\Users\jenkins\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.docker-flags-20210817001618-111344 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube docker-flags-20210817001618-111344]
	I0817 00:16:42.640430   94724 fix.go:57] fixHost completed within 15.6125229s
	I0817 00:16:42.641019   94724 start.go:80] releasing machines lock for "stopped-upgrade-20210817001119-111344", held for 15.6133311s
	I0817 00:16:42.649286   94724 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-20210817001119-111344
	I0817 00:16:43.145184   94724 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0817 00:16:43.152409   94724 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-20210817001119-111344
	I0817 00:16:43.158160   94724 ssh_runner.go:149] Run: systemctl --version
	I0817 00:16:43.164161   94724 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-20210817001119-111344
	I0817 00:16:43.667857   94724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55139 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\stopped-upgrade-20210817001119-111344\id_rsa Username:docker}
	I0817 00:16:43.671795   94724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55139 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\stopped-upgrade-20210817001119-111344\id_rsa Username:docker}
	I0817 00:16:43.997684   94724 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0817 00:16:44.070671   94724 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0817 00:16:44.149587   94724 cruntime.go:249] skipping containerd shutdown because we are bound to it
	I0817 00:16:44.159841   94724 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0817 00:16:44.231701   94724 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 00:16:44.308702   94724 ssh_runner.go:149] Run: sudo systemctl unmask docker.service
	I0817 00:16:44.689512   94724 ssh_runner.go:149] Run: sudo systemctl enable docker.socket
	I0817 00:16:44.968683   94724 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0817 00:16:45.056349   94724 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0817 00:16:45.409108   94724 ssh_runner.go:149] Run: sudo systemctl start docker
	I0817 00:16:45.481083   94724 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0817 00:16:45.887747   94724 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0817 00:16:46.148721   94724 out.go:204] * Preparing Kubernetes v1.18.0 on Docker 19.03.2 ...
	I0817 00:16:46.159563   94724 cli_runner.go:115] Run: docker exec -t stopped-upgrade-20210817001119-111344 dig +short host.docker.internal
	I0817 00:16:47.013246   94724 network.go:69] got host ip for mount in container by digging dns: 192.168.65.2
	I0817 00:16:47.025382   94724 ssh_runner.go:149] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0817 00:16:47.054111   94724 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 00:16:47.159486   94724 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" stopped-upgrade-20210817001119-111344
	I0817 00:16:43.506569   76328 provision.go:172] copyRemoteCerts
	I0817 00:16:43.513248   76328 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 00:16:43.520694   76328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210817001618-111344
	I0817 00:16:44.038971   76328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55144 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\docker-flags-20210817001618-111344\id_rsa Username:docker}
	I0817 00:16:44.260410   76328 vm_assets.go:99] NewFileAsset: C:\Users\jenkins\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0817 00:16:44.261114   76328 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0817 00:16:44.371406   76328 vm_assets.go:99] NewFileAsset: C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0817 00:16:44.371625   76328 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0817 00:16:44.495732   76328 vm_assets.go:99] NewFileAsset: C:\Users\jenkins\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0817 00:16:44.496255   76328 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1273 bytes)
	I0817 00:16:44.658479   76328 provision.go:86] duration metric: configureAuth took 2.0148388s
	I0817 00:16:44.658479   76328 ubuntu.go:193] setting minikube options for container-runtime
	I0817 00:16:44.659065   76328 config.go:177] Loaded profile config "docker-flags-20210817001618-111344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.21.3
	I0817 00:16:44.665122   76328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210817001618-111344
	I0817 00:16:45.207873   76328 main.go:130] libmachine: Using SSH client type: native
	I0817 00:16:45.208318   76328 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x4c95a0] 0x4c9560 <nil>  [] 0s} 127.0.0.1 55144 <nil> <nil>}
	I0817 00:16:45.208318   76328 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0817 00:16:45.548023   76328 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0817 00:16:45.548023   76328 ubuntu.go:71] root file system type: overlay
	I0817 00:16:45.548575   76328 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0817 00:16:45.558238   76328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210817001618-111344
	I0817 00:16:46.040754   76328 main.go:130] libmachine: Using SSH client type: native
	I0817 00:16:46.041171   76328 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x4c95a0] 0x4c9560 <nil>  [] 0s} 127.0.0.1 55144 <nil> <nil>}
	I0817 00:16:46.041404   76328 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="FOO=BAR"
	Environment="BAZ=BAT"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 --debug --icc=true 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0817 00:16:46.391225   76328 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=FOO=BAR
	Environment=BAZ=BAT
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 --debug --icc=true 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0817 00:16:46.404450   76328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210817001618-111344
	I0817 00:16:46.901492   76328 main.go:130] libmachine: Using SSH client type: native
	I0817 00:16:46.901950   76328 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x4c95a0] 0x4c9560 <nil>  [] 0s} 127.0.0.1 55144 <nil> <nil>}
	I0817 00:16:46.902174   76328 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0817 00:16:47.619773   94724 preload.go:131] Checking if preload exists for k8s version v1.18.0 and runtime docker
	I0817 00:16:47.625937   94724 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0817 00:16:48.002553   94724 docker.go:535] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.0
	k8s.gcr.io/kube-apiserver:v1.18.0
	k8s.gcr.io/kube-controller-manager:v1.18.0
	k8s.gcr.io/kube-scheduler:v1.18.0
	kubernetesui/dashboard:v2.0.0-rc6
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	kindest/kindnetd:0.5.3
	k8s.gcr.io/etcd:3.4.3-0
	kubernetesui/metrics-scraper:v1.0.2
	gcr.io/k8s-minikube/storage-provisioner:v1.8.1
	
	-- /stdout --
	I0817 00:16:48.002553   94724 docker.go:541] gcr.io/k8s-minikube/storage-provisioner:v5 wasn't preloaded
	I0817 00:16:48.002553   94724 cache_images.go:78] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.18.0 k8s.gcr.io/kube-controller-manager:v1.18.0 k8s.gcr.io/kube-scheduler:v1.18.0 k8s.gcr.io/kube-proxy:v1.18.0 k8s.gcr.io/pause:3.2 k8s.gcr.io/etcd:3.4.3-0 k8s.gcr.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5 docker.io/kubernetesui/dashboard:v2.1.0 docker.io/kubernetesui/metrics-scraper:v1.0.4]
	I0817 00:16:48.043789   94724 image.go:133] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 00:16:48.053223   94724 image.go:133] retrieving image: k8s.gcr.io/pause:3.2
	I0817 00:16:48.071225   94724 image.go:133] retrieving image: k8s.gcr.io/etcd:3.4.3-0
	I0817 00:16:48.077014   94724 image.go:133] retrieving image: docker.io/kubernetesui/dashboard:v2.1.0
	I0817 00:16:48.088758   94724 image.go:133] retrieving image: k8s.gcr.io/kube-proxy:v1.18.0
	I0817 00:16:48.093738   94724 image.go:175] daemon lookup for k8s.gcr.io/etcd:3.4.3-0: Error response from daemon: reference does not exist
	I0817 00:16:48.096819   94724 image.go:175] daemon lookup for k8s.gcr.io/pause:3.2: Error response from daemon: reference does not exist
	I0817 00:16:48.139492   94724 image.go:175] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: reference does not exist
	I0817 00:16:48.159483   94724 image.go:175] daemon lookup for k8s.gcr.io/kube-proxy:v1.18.0: Error response from daemon: reference does not exist
	I0817 00:16:48.168047   94724 image.go:133] retrieving image: k8s.gcr.io/kube-apiserver:v1.18.0
	I0817 00:16:48.175038   94724 image.go:133] retrieving image: k8s.gcr.io/kube-scheduler:v1.18.0
	I0817 00:16:48.188747   94724 image.go:133] retrieving image: k8s.gcr.io/kube-controller-manager:v1.18.0
	I0817 00:16:48.198274   94724 image.go:175] daemon lookup for k8s.gcr.io/kube-scheduler:v1.18.0: Error response from daemon: reference does not exist
	I0817 00:16:48.210320   94724 image.go:175] daemon lookup for k8s.gcr.io/kube-apiserver:v1.18.0: Error response from daemon: reference does not exist
	W0817 00:16:48.217301   94724 image.go:185] authn lookup for k8s.gcr.io/etcd:3.4.3-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0817 00:16:48.217893   94724 image.go:175] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.18.0: Error response from daemon: reference does not exist
	I0817 00:16:48.223958   94724 image.go:175] daemon lookup for docker.io/kubernetesui/dashboard:v2.1.0: Error response from daemon: reference does not exist
	I0817 00:16:48.239523   94724 image.go:133] retrieving image: k8s.gcr.io/coredns:1.6.7
	I0817 00:16:48.242011   94724 image.go:133] retrieving image: docker.io/kubernetesui/metrics-scraper:v1.0.4
	I0817 00:16:48.257031   94724 image.go:175] daemon lookup for k8s.gcr.io/coredns:1.6.7: Error response from daemon: reference does not exist
	I0817 00:16:48.287152   94724 image.go:175] daemon lookup for docker.io/kubernetesui/metrics-scraper:v1.0.4: Error response from daemon: reference does not exist
	W0817 00:16:48.335128   94724 image.go:185] authn lookup for k8s.gcr.io/pause:3.2 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0817 00:16:48.450139   94724 image.go:185] authn lookup for gcr.io/k8s-minikube/storage-provisioner:v5 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0817 00:16:48.558274   94724 image.go:185] authn lookup for k8s.gcr.io/kube-proxy:v1.18.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0817 00:16:48.688642   94724 image.go:185] authn lookup for k8s.gcr.io/kube-scheduler:v1.18.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0817 00:16:48.729470   94724 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} k8s.gcr.io/etcd:3.4.3-0
	I0817 00:16:48.794474   94724 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} k8s.gcr.io/pause:3.2
	W0817 00:16:48.800365   94724 image.go:185] authn lookup for k8s.gcr.io/kube-apiserver:v1.18.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0817 00:16:48.918910   94724 image.go:185] authn lookup for k8s.gcr.io/kube-controller-manager:v1.18.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0817 00:16:49.004195   94724 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-proxy:v1.18.0
	W0817 00:16:49.027080   94724 image.go:185] authn lookup for docker.io/kubernetesui/dashboard:v2.1.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0817 00:16:49.110223   94724 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-scheduler:v1.18.0
	I0817 00:16:49.132810   94724 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	W0817 00:16:49.145881   94724 image.go:185] authn lookup for k8s.gcr.io/coredns:1.6.7 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0817 00:16:49.263049   94724 image.go:185] authn lookup for docker.io/kubernetesui/metrics-scraper:v1.0.4 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0817 00:16:49.263996   94724 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-apiserver:v1.18.0
	I0817 00:16:49.419340   94724 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-controller-manager:v1.18.0
	I0817 00:16:49.638256   94724 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} k8s.gcr.io/coredns:1.6.7
	I0817 00:16:49.914693   94724 cache_images.go:106] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0817 00:16:49.914928   94724 localpath.go:146] windows sanitize: C:\Users\jenkins\minikube-integration\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins\minikube-integration\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner_v5
	I0817 00:16:49.914928   94724 docker.go:236] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 00:16:49.923483   94724 ssh_runner.go:149] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 00:16:50.003586   94724 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} docker.io/kubernetesui/dashboard:v2.1.0
	I0817 00:16:50.023275   94724 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} docker.io/kubernetesui/metrics-scraper:v1.0.4
	I0817 00:16:50.440175   94724 cache_images.go:106] "docker.io/kubernetesui/dashboard:v2.1.0" needs transfer: "docker.io/kubernetesui/dashboard:v2.1.0" does not exist at hash "9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db" in container runtime
	I0817 00:16:50.440175   94724 localpath.go:146] windows sanitize: C:\Users\jenkins\minikube-integration\.minikube\cache\images\docker.io\kubernetesui\dashboard:v2.1.0 -> C:\Users\jenkins\minikube-integration\.minikube\cache\images\docker.io\kubernetesui\dashboard_v2.1.0
	I0817 00:16:50.440175   94724 docker.go:236] Removing image: docker.io/kubernetesui/dashboard:v2.1.0
	I0817 00:16:50.448153   94724 ssh_runner.go:149] Run: docker rmi docker.io/kubernetesui/dashboard:v2.1.0
	I0817 00:16:50.456783   94724 cache_images.go:276] Loading image from: C:\Users\jenkins\minikube-integration\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner_v5
	I0817 00:16:50.468544   94724 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0817 00:16:50.491221   94724 cache_images.go:106] "docker.io/kubernetesui/metrics-scraper:v1.0.4" needs transfer: "docker.io/kubernetesui/metrics-scraper:v1.0.4" does not exist at hash "86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4" in container runtime
	I0817 00:16:50.491221   94724 localpath.go:146] windows sanitize: C:\Users\jenkins\minikube-integration\.minikube\cache\images\docker.io\kubernetesui\metrics-scraper:v1.0.4 -> C:\Users\jenkins\minikube-integration\.minikube\cache\images\docker.io\kubernetesui\metrics-scraper_v1.0.4
	I0817 00:16:50.491399   94724 docker.go:236] Removing image: docker.io/kubernetesui/metrics-scraper:v1.0.4
	I0817 00:16:50.493035   94724 ssh_runner.go:149] Run: docker rmi docker.io/kubernetesui/metrics-scraper:v1.0.4
	I0817 00:16:50.688798   94724 ssh_runner.go:306] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0817 00:16:50.689047   94724 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I0817 00:16:50.689181   94724 cache_images.go:276] Loading image from: C:\Users\jenkins\minikube-integration\.minikube\cache\images\docker.io\kubernetesui\dashboard_v2.1.0
	I0817 00:16:50.700096   94724 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/dashboard_v2.1.0
	I0817 00:16:50.772429   94724 cache_images.go:276] Loading image from: C:\Users\jenkins\minikube-integration\.minikube\cache\images\docker.io\kubernetesui\metrics-scraper_v1.0.4
	I0817 00:16:50.776905   94724 ssh_runner.go:306] existence check for /var/lib/minikube/images/dashboard_v2.1.0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/dashboard_v2.1.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/dashboard_v2.1.0': No such file or directory
	I0817 00:16:50.777100   94724 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\cache\images\docker.io\kubernetesui\dashboard_v2.1.0 --> /var/lib/minikube/images/dashboard_v2.1.0 (67993600 bytes)
	I0817 00:16:50.781159   94724 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/metrics-scraper_v1.0.4
	I0817 00:16:51.262946   94724 ssh_runner.go:306] existence check for /var/lib/minikube/images/metrics-scraper_v1.0.4: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/metrics-scraper_v1.0.4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/metrics-scraper_v1.0.4': No such file or directory
	I0817 00:16:51.263199   94724 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\cache\images\docker.io\kubernetesui\metrics-scraper_v1.0.4 --> /var/lib/minikube/images/metrics-scraper_v1.0.4 (16022528 bytes)
	I0817 00:16:52.385232   94724 docker.go:203] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0817 00:16:52.391718   94724 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/storage-provisioner_v5
	I0817 00:16:56.200360   94724 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/storage-provisioner_v5: (3.8079995s)
	I0817 00:16:56.200360   94724 cache_images.go:305] Transferred and loaded C:\Users\jenkins\minikube-integration\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner_v5 from cache
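This closes the cache-images loop begun above: docker image inspect --format {{.Id}} decides whether each image already exists at the pinned hash; a miss triggers "needs transfer", the stale tag is removed with docker rmi, the tarball is copied into /var/lib/minikube/images, and docker load -i imports it. A sketch of that load-and-verify step, using the image and paths from the log and a hypothetical helper:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// loadIfMissing imports tar into the Docker runtime unless image already
	// resolves to wantID, mirroring the "needs transfer" decision above.
	func loadIfMissing(image, wantID, tar string) error {
		out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", image).Output()
		if err == nil && strings.TrimPrefix(strings.TrimSpace(string(out)), "sha256:") == wantID {
			return nil // already present at the expected hash
		}
		_ = exec.Command("docker", "rmi", image).Run() // drop any stale tag; errors are non-fatal
		if err := exec.Command("docker", "load", "-i", tar).Run(); err != nil {
			return err
		}
		fmt.Println("Transferred and loaded", tar)
		return nil
	}

	func main() {
		_ = loadIfMissing("gcr.io/k8s-minikube/storage-provisioner:v5",
			"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
			"/var/lib/minikube/images/storage-provisioner_v5")
	}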
	I0817 00:16:56.915917   94724 docker.go:203] Loading image: /var/lib/minikube/images/metrics-scraper_v1.0.4
	I0817 00:16:56.923589   94724 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/metrics-scraper_v1.0.4
	I0817 00:16:53.635775   76328 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-07-30 19:52:33.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2021-08-17 00:16:46.378371000 +0000
	@@ -1,30 +1,34 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	+BindsTo=containerd.service
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+Environment=FOO=BAR
	+Environment=BAZ=BAT
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 --debug --icc=true 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +36,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0817 00:16:53.635999   76328 machine.go:91] provisioned docker machine in 12.706306s
	I0817 00:16:53.635999   76328 client.go:171] LocalClient.Create took 30.6353643s
	I0817 00:16:53.635999   76328 start.go:168] duration metric: libmachine.API.Create for "docker-flags-20210817001618-111344" took 30.6355635s
	I0817 00:16:53.635999   76328 start.go:267] post-start starting for "docker-flags-20210817001618-111344" (driver="docker")
	I0817 00:16:53.635999   76328 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 00:16:53.645877   76328 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 00:16:53.652182   76328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210817001618-111344
	I0817 00:16:54.149158   76328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55144 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\docker-flags-20210817001618-111344\id_rsa Username:docker}
	I0817 00:16:54.405676   76328 ssh_runner.go:149] Run: cat /etc/os-release
	I0817 00:16:54.441257   76328 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0817 00:16:54.441350   76328 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0817 00:16:54.441350   76328 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0817 00:16:54.441350   76328 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0817 00:16:54.441350   76328 filesync.go:126] Scanning C:\Users\jenkins\minikube-integration\.minikube\addons for local assets ...
	I0817 00:16:54.441750   76328 filesync.go:126] Scanning C:\Users\jenkins\minikube-integration\.minikube\files for local assets ...
	I0817 00:16:54.442342   76328 filesync.go:149] local asset: C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\1113442.pem -> 1113442.pem in /etc/ssl/certs
	I0817 00:16:54.442342   76328 vm_assets.go:99] NewFileAsset: C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\1113442.pem -> /etc/ssl/certs/1113442.pem
	I0817 00:16:54.450268   76328 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0817 00:16:54.509532   76328 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\1113442.pem --> /etc/ssl/certs/1113442.pem (1708 bytes)
	I0817 00:16:54.615600   76328 start.go:270] post-start completed in 979.5635ms
	I0817 00:16:54.630888   76328 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" docker-flags-20210817001618-111344
	I0817 00:16:55.140997   76328 profile.go:148] Saving config to C:\Users\jenkins\minikube-integration\.minikube\profiles\docker-flags-20210817001618-111344\config.json ...
	I0817 00:16:55.153046   76328 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0817 00:16:55.158751   76328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210817001618-111344
	I0817 00:16:55.645288   76328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55144 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\docker-flags-20210817001618-111344\id_rsa Username:docker}
	I0817 00:16:55.845340   76328 start.go:129] duration metric: createHost completed in 32.8478416s
	I0817 00:16:55.845457   76328 start.go:80] releasing machines lock for "docker-flags-20210817001618-111344", held for 32.8483552s
	I0817 00:16:55.852996   76328 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" docker-flags-20210817001618-111344
	I0817 00:16:56.347351   76328 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0817 00:16:56.353867   76328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210817001618-111344
	I0817 00:16:56.354768   76328 ssh_runner.go:149] Run: systemctl --version
	I0817 00:16:56.360932   76328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210817001618-111344
	I0817 00:16:56.876837   76328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55144 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\docker-flags-20210817001618-111344\id_rsa Username:docker}
	I0817 00:16:56.879773   76328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55144 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\docker-flags-20210817001618-111344\id_rsa Username:docker}
	I0817 00:16:57.221522   76328 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0817 00:16:57.299899   76328 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0817 00:16:57.356759   76328 cruntime.go:249] skipping containerd shutdown because we are bound to it
	I0817 00:16:57.366257   76328 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0817 00:16:57.423826   76328 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 00:16:57.514567   76328 ssh_runner.go:149] Run: sudo systemctl unmask docker.service
	I0817 00:16:57.992342   76328 ssh_runner.go:149] Run: sudo systemctl enable docker.socket
	I0817 00:16:58.385334   76328 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0817 00:16:58.495829   76328 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0817 00:16:58.946338   76328 ssh_runner.go:149] Run: sudo systemctl start docker
	I0817 00:16:59.063597   76328 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0817 00:16:59.297087   76328 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0817 00:16:59.550289   76328 out.go:204] * Preparing Kubernetes v1.21.3 on Docker 20.10.8 ...
	I0817 00:16:59.552443   76328 out.go:177]   - opt debug
	I0817 00:16:59.555380   76328 out.go:177]   - opt icc=true
	I0817 00:16:59.557275   76328 out.go:177]   - env FOO=BAR
	I0817 00:17:01.440274   94724 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/metrics-scraper_v1.0.4: (4.5156773s)
	I0817 00:17:01.440274   94724 cache_images.go:305] Transferred and loaded C:\Users\jenkins\minikube-integration\.minikube\cache\images\docker.io\kubernetesui\metrics-scraper_v1.0.4 from cache
	I0817 00:16:59.559680   76328 out.go:177]   - env BAZ=BAT
	I0817 00:16:59.567228   76328 cli_runner.go:115] Run: docker exec -t docker-flags-20210817001618-111344 dig +short host.docker.internal
	I0817 00:17:00.417795   76328 network.go:69] got host ip for mount in container by digging dns: 192.168.65.2
	I0817 00:17:00.430266   76328 ssh_runner.go:149] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0817 00:17:00.466289   76328 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
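
The /etc/hosts rewrite above is deliberately idempotent: strip any existing host.minikube.internal line, append a fresh entry, then copy the temp file back into place with sudo. A minimal Go sketch of the same filter-and-append logic (updateHostsEntry is a hypothetical helper written for this note, not minikube's actual code):

package main

import (
	"fmt"
	"os"
	"strings"
)

// updateHostsEntry drops any line ending in "\t<host>" and appends
// "<ip>\t<host>", mirroring the grep -v / echo pipeline in the log.
func updateHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // stale entry: drop it
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	const demo = "/tmp/hosts-demo" // demo file, not the real /etc/hosts
	_ = os.WriteFile(demo, []byte("127.0.0.1\tlocalhost\n10.0.0.9\thost.minikube.internal\n"), 0644)
	if err := updateHostsEntry(demo, "192.168.65.2", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	out, _ := os.ReadFile(demo)
	fmt.Print(string(out))
}
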
	I0817 00:17:00.550167   76328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" docker-flags-20210817001618-111344
	I0817 00:17:01.083530   76328 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
	I0817 00:17:01.090707   76328 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0817 00:17:01.252844   76328 docker.go:535] Got preloaded images: 
	I0817 00:17:01.252844   76328 docker.go:541] k8s.gcr.io/kube-apiserver:v1.21.3 wasn't preloaded
	I0817 00:17:01.264312   76328 ssh_runner.go:149] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0817 00:17:01.340190   76328 ssh_runner.go:149] Run: which lz4
	I0817 00:17:01.372117   76328 vm_assets.go:99] NewFileAsset: C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0817 00:17:01.380495   76328 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0817 00:17:01.430209   76328 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0817 00:17:01.430459   76328 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (504826016 bytes)
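
This is minikube's image preload fast path: stat /preloaded.tar.lz4 (exit status 1 means it is absent), scp the ~500 MB tarball across, and, as the later `sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4` runs show, unpack it directly into /var so Docker's overlay2 store comes up pre-populated. A sketch of that extraction step via os/exec, assuming tar, lz4, and sudo exist on the target:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload unpacks a docker-overlay2 preload tarball the same way
// the ssh_runner invocation in the log does: shell out to tar with lz4
// as the compression filter.
func extractPreload(tarball, destDir string) error {
	cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", destDir, "-xf", tarball)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Fprintln(os.Stderr, "extract failed:", err)
	}
}
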
	I0817 00:17:03.733729   94724 docker.go:203] Loading image: /var/lib/minikube/images/dashboard_v2.1.0
	I0817 00:17:03.740449   94724 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/dashboard_v2.1.0
	I0817 00:17:17.946695   94724 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/dashboard_v2.1.0: (14.2057061s)
	I0817 00:17:17.946695   94724 cache_images.go:305] Transferred and loaded C:\Users\jenkins\minikube-integration\.minikube\cache\images\docker.io\kubernetesui\dashboard_v2.1.0 from cache
	I0817 00:17:17.946981   94724 cache_images.go:113] Successfully loaded all cached images
	I0817 00:17:17.946981   94724 cache_images.go:82] LoadImages completed in 29.9432902s
	I0817 00:17:17.954094   94724 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
	I0817 00:17:18.196456   94724 cni.go:93] Creating CNI manager for ""
	I0817 00:17:18.196456   94724 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0817 00:17:18.196743   94724 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0817 00:17:18.196743   94724 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.0.2 APIServerPort:8443 KubernetesVersion:v1.18.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-20210817001119-111344 NodeName:stopped-upgrade-20210817001119-111344 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.0.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:172.17.0.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0817 00:17:18.197017   94724 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.0.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "stopped-upgrade-20210817001119-111344"
	  kubeletExtraArgs:
	    node-ip: 172.17.0.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.0.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
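
The four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are rendered from the kubeadm options struct logged at kubeadm.go:153. A toy text/template sketch of that rendering step; the fragment and field names are illustrative only, since minikube's real template is far larger:

package main

import (
	"os"
	"text/template"
)

// fragment is a cut-down stand-in for minikube's kubeadm config template.
const fragment = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(fragment))
	// Values lifted from the kubeadm options struct in the log above.
	_ = t.Execute(os.Stdout, map[string]interface{}{
		"AdvertiseAddress": "172.17.0.2",
		"APIServerPort":    8443,
	})
}
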
	
	I0817 00:17:18.197333   94724 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=stopped-upgrade-20210817001119-111344 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.0.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-20210817001119-111344 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0817 00:17:18.209647   94724 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.18.0
	I0817 00:17:18.260247   94724 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 00:17:18.267771   94724 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 00:17:18.311269   94724 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I0817 00:17:18.376040   94724 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0817 00:17:18.440634   94724 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2074 bytes)
	I0817 00:17:18.510586   94724 ssh_runner.go:149] Run: grep 172.17.0.2	control-plane.minikube.internal$ /etc/hosts
	I0817 00:17:18.537506   94724 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.0.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 00:17:18.609364   94724 certs.go:52] Setting up C:\Users\jenkins\minikube-integration\.minikube\profiles\stopped-upgrade-20210817001119-111344 for IP: 172.17.0.2
	I0817 00:17:18.610062   94724 certs.go:179] skipping minikubeCA CA generation: C:\Users\jenkins\minikube-integration\.minikube\ca.key
	I0817 00:17:18.610062   94724 certs.go:179] skipping proxyClientCA CA generation: C:\Users\jenkins\minikube-integration\.minikube\proxy-client-ca.key
	I0817 00:17:18.610490   94724 certs.go:293] skipping minikube-user signed cert generation: C:\Users\jenkins\minikube-integration\.minikube\profiles\stopped-upgrade-20210817001119-111344\client.key
	I0817 00:17:18.610749   94724 certs.go:297] generating minikube signed cert: C:\Users\jenkins\minikube-integration\.minikube\profiles\stopped-upgrade-20210817001119-111344\apiserver.key.7b749c5f
	I0817 00:17:18.610986   94724 crypto.go:69] Generating cert C:\Users\jenkins\minikube-integration\.minikube\profiles\stopped-upgrade-20210817001119-111344\apiserver.crt.7b749c5f with IP's: [172.17.0.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0817 00:17:18.880770   94724 crypto.go:157] Writing cert to C:\Users\jenkins\minikube-integration\.minikube\profiles\stopped-upgrade-20210817001119-111344\apiserver.crt.7b749c5f ...
	I0817 00:17:18.880770   94724 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\.minikube\profiles\stopped-upgrade-20210817001119-111344\apiserver.crt.7b749c5f: {Name:mk05d5531b45a1d0ced11bbac7de425ad03eaa3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 00:17:18.882595   94724 crypto.go:165] Writing key to C:\Users\jenkins\minikube-integration\.minikube\profiles\stopped-upgrade-20210817001119-111344\apiserver.key.7b749c5f ...
	I0817 00:17:18.882595   94724 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\.minikube\profiles\stopped-upgrade-20210817001119-111344\apiserver.key.7b749c5f: {Name:mkf0be0c2d90c5fe1fce89152d95c1554ccc9114 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 00:17:18.884492   94724 certs.go:308] copying C:\Users\jenkins\minikube-integration\.minikube\profiles\stopped-upgrade-20210817001119-111344\apiserver.crt.7b749c5f -> C:\Users\jenkins\minikube-integration\.minikube\profiles\stopped-upgrade-20210817001119-111344\apiserver.crt
	I0817 00:17:18.896973   94724 certs.go:312] copying C:\Users\jenkins\minikube-integration\.minikube\profiles\stopped-upgrade-20210817001119-111344\apiserver.key.7b749c5f -> C:\Users\jenkins\minikube-integration\.minikube\profiles\stopped-upgrade-20210817001119-111344\apiserver.key
	I0817 00:17:18.898980   94724 certs.go:293] skipping aggregator signed cert generation: C:\Users\jenkins\minikube-integration\.minikube\profiles\stopped-upgrade-20210817001119-111344\proxy-client.key
	I0817 00:17:18.900087   94724 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\111344.pem (1338 bytes)
	W0817 00:17:18.901369   94724 certs.go:372] ignoring C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\111344_empty.pem, impossibly tiny 0 bytes
	I0817 00:17:18.901473   94724 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0817 00:17:18.901729   94724 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0817 00:17:18.902105   94724 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0817 00:17:18.902570   94724 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0817 00:17:18.902570   94724 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\1113442.pem (1708 bytes)
	I0817 00:17:18.905274   94724 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\stopped-upgrade-20210817001119-111344\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 00:17:18.999441   94724 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\stopped-upgrade-20210817001119-111344\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0817 00:17:19.088167   94724 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\stopped-upgrade-20210817001119-111344\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 00:17:19.176890   94724 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\stopped-upgrade-20210817001119-111344\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0817 00:17:19.240271   94724 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 00:17:19.339147   94724 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0817 00:17:19.425022   94724 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 00:17:19.507777   94724 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0817 00:17:19.589649   94724 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 00:17:19.697201   94724 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\certs\111344.pem --> /usr/share/ca-certificates/111344.pem (1338 bytes)
	I0817 00:17:19.820705   94724 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\1113442.pem --> /usr/share/ca-certificates/1113442.pem (1708 bytes)
	I0817 00:17:19.945907   94724 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0817 00:17:20.020964   94724 ssh_runner.go:149] Run: openssl version
	I0817 00:17:20.074718   94724 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 00:17:20.146791   94724 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 00:17:20.177276   94724 certs.go:419] hashing: -rwxr-xr-x 1 root root 1111 Aug 16 23:12 /usr/share/ca-certificates/minikubeCA.pem
	I0817 00:17:20.188505   94724 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 00:17:20.231348   94724 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0817 00:17:20.291932   94724 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111344.pem && ln -fs /usr/share/ca-certificates/111344.pem /etc/ssl/certs/111344.pem"
	I0817 00:17:20.360820   94724 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/111344.pem
	I0817 00:17:20.391002   94724 certs.go:419] hashing: -rwxr-xr-x 1 root root 1338 Aug 16 23:23 /usr/share/ca-certificates/111344.pem
	I0817 00:17:20.401332   94724 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111344.pem
	I0817 00:17:20.444102   94724 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/111344.pem /etc/ssl/certs/51391683.0"
	I0817 00:17:20.497789   94724 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1113442.pem && ln -fs /usr/share/ca-certificates/1113442.pem /etc/ssl/certs/1113442.pem"
	I0817 00:17:20.560052   94724 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1113442.pem
	I0817 00:17:20.588435   94724 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 16 23:23 /usr/share/ca-certificates/1113442.pem
	I0817 00:17:20.596322   94724 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1113442.pem
	I0817 00:17:20.657012   94724 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1113442.pem /etc/ssl/certs/3ec20f2e.0"
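
The openssl/ln sequence above is standard OpenSSL trust-store plumbing: `openssl x509 -hash -noout -in <pem>` prints the certificate's subject hash (b5213941, 51391683, and 3ec20f2e here), and a `<hash>.0` symlink in /etc/ssl/certs lets OpenSSL-linked clients locate the CA by hash lookup. A small Go sketch of the `test -L <link> || ln -fs <target> <link>` step; the paths are illustrative:

package main

import (
	"fmt"
	"os"
)

// ensureHashLink mirrors the shell one-liners above: create the
// hash-named symlink only if one is not already in place.
func ensureHashLink(target, link string) error {
	if fi, err := os.Lstat(link); err == nil && fi.Mode()&os.ModeSymlink != 0 {
		return nil // already a symlink, nothing to do
	}
	os.Remove(link) // drop a plain file if one is in the way (ln -f)
	return os.Symlink(target, link)
}

func main() {
	if err := ensureHashLink("/etc/ssl/certs/minikubeCA.pem", "/tmp/b5213941.0"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
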
	I0817 00:17:20.726520   94724 kubeadm.go:390] StartCluster: {Name:stopped-upgrade-20210817001119-111344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-20210817001119-111344 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 00:17:20.733921   94724 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0817 00:17:21.007005   94724 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 00:17:21.057171   94724 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0817 00:17:21.058959   94724 kubeadm.go:600] restartCluster start
	I0817 00:17:21.067865   94724 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0817 00:17:21.121645   94724 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0817 00:17:21.128984   94724 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" stopped-upgrade-20210817001119-111344
	I0817 00:17:21.618293   94724 kubeconfig.go:117] verify returned: extract IP: "stopped-upgrade-20210817001119-111344" does not appear in C:\Users\jenkins\minikube-integration\kubeconfig
	I0817 00:17:21.618941   94724 kubeconfig.go:128] "stopped-upgrade-20210817001119-111344" context is missing from C:\Users\jenkins\minikube-integration\kubeconfig - will repair!
	I0817 00:17:21.619663   94724 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\kubeconfig: {Name:mk312e0248780fd448f3a83862df8ee597f47373 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 00:17:21.632928   94724 kapi.go:59] client config for stopped-upgrade-20210817001119-111344: &rest.Config{Host:"https://127.0.0.1:55137", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins\\minikube-integration\\.minikube\\profiles\\stopped-upgrade-20210817001119-111344/client.crt", KeyFile:"C:\\Users\\jenkins\\minikube-integration\\.minikube\\profiles\\stopped-upgrade-20210817001119-111344/client.key", CAFile:"C:\\Users\\jenkins\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x14d7000), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0817 00:17:21.652232   94724 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0817 00:17:21.695891   94724 kubeadm.go:568] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2021-08-17 00:14:54.821356000 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2021-08-17 00:17:18.500158000 +0000
	@@ -23,16 +23,52 @@
	   certSANs: ["127.0.0.1", "localhost", "172.17.0.2"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+controllerManager:
	+  extraArgs:
	+    allocate-node-cidrs: "true"
	+    leader-elect: "false"
	+scheduler:
	+  extraArgs:
	+    leader-elect: "false"
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	-controlPlaneEndpoint: 172.17.0.2:8443
	+controlPlaneEndpoint: control-plane.minikube.internal:8443
	 dns:
	   type: CoreDNS
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	+    extraArgs:
	+      proxy-refresh-interval: "70000"
	 kubernetesVersion: v1.18.0
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	   serviceSubnet: 10.96.0.0/12
	+---
	+apiVersion: kubelet.config.k8s.io/v1beta1
	+kind: KubeletConfiguration
	+authentication:
	+  x509:
	+    clientCAFile: /var/lib/minikube/certs/ca.crt
	+cgroupDriver: cgroupfs
	+clusterDomain: "cluster.local"
	+# disable disk resource management by default
	+imageGCHighThresholdPercent: 100
	+evictionHard:
	+  nodefs.available: "0%"
	+  nodefs.inodesFree: "0%"
	+  imagefs.available: "0%"
	+failSwapOn: false
	+staticPodPath: /etc/kubernetes/manifests
	+---
	+apiVersion: kubeproxy.config.k8s.io/v1alpha1
	+kind: KubeProxyConfiguration
	+clusterCIDR: "10.244.0.0/16"
	+metricsBindAddress: 0.0.0.0:10249
	+conntrack:
	+  maxPerCore: 0
	+# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	+  tcpEstablishedTimeout: 0s
	+# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	+  tcpCloseWaitTimeout: 0s
	
	-- /stdout --
	I0817 00:17:21.695891   94724 kubeadm.go:1032] stopping kube-system containers ...
	I0817 00:17:21.717755   94724 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0817 00:17:21.915930   94724 docker.go:367] Stopping containers: [198008788b87 2af10de3bff0 ccc5c1975f3e 90720ba7f59f 4d3dcc17996d 71b387ab00d0 ee3aa5f5db60 b358d77f37d2 30fb85ddd616 b1c3fba68b4c 7a43cdf5f68b 1cd4a638b1b8 0d5269c4aa84 6b07cb79d6e4 150878ef645b 5476fdc77b95 88960f7d5222]
	I0817 00:17:21.921259   94724 ssh_runner.go:149] Run: docker stop 198008788b87 2af10de3bff0 ccc5c1975f3e 90720ba7f59f 4d3dcc17996d 71b387ab00d0 ee3aa5f5db60 b358d77f37d2 30fb85ddd616 b1c3fba68b4c 7a43cdf5f68b 1cd4a638b1b8 0d5269c4aa84 6b07cb79d6e4 150878ef645b 5476fdc77b95 88960f7d5222
	I0817 00:17:22.214901   94724 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0817 00:17:22.296554   94724 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 00:17:22.346421   94724 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5594 Aug 17 00:15 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5630 Aug 17 00:15 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2066 Aug 17 00:15 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5574 Aug 17 00:15 /etc/kubernetes/scheduler.conf
	
	I0817 00:17:22.356513   94724 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0817 00:17:22.398285   94724 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0817 00:17:22.412835   94724 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0817 00:17:22.474295   94724 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0817 00:17:22.518963   94724 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0817 00:17:22.528948   94724 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0817 00:17:22.577201   94724 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0817 00:17:22.613434   94724 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0817 00:17:22.628799   94724 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0817 00:17:22.699381   94724 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0817 00:17:22.750095   94724 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0817 00:17:22.759242   94724 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0817 00:17:22.804657   94724 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 00:17:22.844867   94724 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0817 00:17:22.844990   94724 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 00:17:23.174125   94724 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 00:17:27.169559   94724 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (3.9951778s)
	I0817 00:17:27.169740   94724 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0817 00:17:28.117915   94724 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 00:17:28.471625   94724 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0817 00:17:28.870505   94724 api_server.go:50] waiting for apiserver process to appear ...
	I0817 00:17:28.878991   94724 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:17:29.472720   94724 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:17:29.964273   94724 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:17:30.467662   94724 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:17:30.967663   94724 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:17:31.468469   94724 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:17:31.967935   94724 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:17:32.461908   94724 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:17:32.967941   94724 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:17:33.467334   94724 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:17:33.966961   94724 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:17:34.470541   94724 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:17:34.978394   94724 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:17:35.469932   94724 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:17:35.966985   94724 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:17:36.470086   94724 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:17:36.971399   94724 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:17:39.576935   57864 docker.go:500] Took 59.830258 seconds to copy over tarball
	I0817 00:17:39.584772   57864 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0817 00:17:37.475402   94724 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:17:37.964618   94724 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:17:38.464606   94724 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:17:38.858238   94724 api_server.go:70] duration metric: took 9.9873531s to wait for apiserver process to appear ...
	I0817 00:17:38.858238   94724 api_server.go:86] waiting for apiserver healthz status ...
	I0817 00:17:38.858238   94724 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55137/healthz ...
	I0817 00:17:38.872723   94724 api_server.go:255] stopped: https://127.0.0.1:55137/healthz: Get "https://127.0.0.1:55137/healthz": EOF
	I0817 00:17:39.373321   94724 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55137/healthz ...
	I0817 00:17:44.374999   94724 api_server.go:255] stopped: https://127.0.0.1:55137/healthz: Get "https://127.0.0.1:55137/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0817 00:17:44.874822   94724 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55137/healthz ...
	I0817 00:17:49.878191   94724 api_server.go:255] stopped: https://127.0.0.1:55137/healthz: Get "https://127.0.0.1:55137/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0817 00:17:50.373623   94724 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55137/healthz ...
	I0817 00:17:55.375632   94724 api_server.go:255] stopped: https://127.0.0.1:55137/healthz: Get "https://127.0.0.1:55137/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0817 00:17:55.874083   94724 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55137/healthz ...
	I0817 00:17:53.518321   76328 docker.go:500] Took 52.143671 seconds to copy over tarball
	I0817 00:17:53.533361   76328 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0817 00:17:59.544389   57864 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (19.9588586s)
	I0817 00:17:59.544389   57864 ssh_runner.go:100] rm: /preloaded.tar.lz4
	I0817 00:18:00.085435   57864 ssh_runner.go:149] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0817 00:18:00.115471   57864 ssh_runner.go:316] scp memory --> /var/lib/docker/image/overlay2/repositories.json (3152 bytes)
	I0817 00:18:00.182820   57864 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0817 00:18:00.477304   57864 ssh_runner.go:149] Run: sudo systemctl restart docker
	I0817 00:17:59.811523   94724 api_server.go:265] https://127.0.0.1:55137/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0817 00:17:59.812182   94724 api_server.go:101] status: https://127.0.0.1:55137/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0817 00:17:59.873858   94724 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55137/healthz ...
	I0817 00:17:59.938844   94724 api_server.go:265] https://127.0.0.1:55137/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0817 00:17:59.938966   94724 api_server.go:101] status: https://127.0.0.1:55137/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 00:18:00.374202   94724 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55137/healthz ...
	I0817 00:18:00.470768   94724 api_server.go:265] https://127.0.0.1:55137/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0817 00:18:00.470768   94724 api_server.go:101] status: https://127.0.0.1:55137/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 00:18:00.873914   94724 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55137/healthz ...
	I0817 00:18:02.457868   94724 api_server.go:265] https://127.0.0.1:55137/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0817 00:18:02.970401   94724 api_server.go:101] status: https://127.0.0.1:55137/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 00:18:03.373829   94724 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55137/healthz ...
	I0817 00:18:05.396583   94724 api_server.go:265] https://127.0.0.1:55137/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0817 00:18:05.396583   94724 api_server.go:101] status: https://127.0.0.1:55137/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 00:18:05.874691   94724 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55137/healthz ...
	I0817 00:18:06.002816   94724 api_server.go:265] https://127.0.0.1:55137/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0817 00:18:06.002972   94724 api_server.go:101] status: https://127.0.0.1:55137/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 00:18:06.374703   94724 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55137/healthz ...
	I0817 00:18:04.883173   76328 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (11.3493806s)
	I0817 00:18:04.883404   76328 ssh_runner.go:100] rm: /preloaded.tar.lz4
	I0817 00:18:05.288384   76328 ssh_runner.go:149] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0817 00:18:05.315126   76328 ssh_runner.go:316] scp memory --> /var/lib/docker/image/overlay2/repositories.json (3152 bytes)
	I0817 00:18:05.370985   76328 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0817 00:18:05.564479   76328 ssh_runner.go:149] Run: sudo systemctl restart docker
	I0817 00:18:07.852074   94724 api_server.go:265] https://127.0.0.1:55137/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0817 00:18:07.852598   94724 api_server.go:101] status: https://127.0.0.1:55137/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 00:18:07.874031   94724 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55137/healthz ...
	I0817 00:18:07.927272   94724 api_server.go:265] https://127.0.0.1:55137/healthz returned 200:
	ok
	I0817 00:18:07.963210   94724 api_server.go:139] control plane version: v1.18.0
	I0817 00:18:07.963480   94724 api_server.go:129] duration metric: took 29.1038667s to wait for apiserver health ...
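
The healthz exchange above is a plain HTTPS polling loop: the first probes fail before HTTP even starts (EOF, client timeouts), the apiserver then answers 403 for the anonymous user, then 500 while poststarthooks such as rbac/bootstrap-roles are still settling, and finally 200 "ok". A minimal sketch of such a loop; it skips certificate verification only because this sketch carries no client certs, whereas minikube's real check authenticates with the cluster's client certificates:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls an apiserver /healthz endpoint until it returns
// 200 "ok" or the deadline passes.
func waitHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	end := time.Now().Add(deadline)
	for time.Now().Before(end) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200 ok
			}
			fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitHealthz("https://127.0.0.1:55137/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
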
	I0817 00:18:07.963480   94724 cni.go:93] Creating CNI manager for ""
	I0817 00:18:07.963480   94724 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0817 00:18:07.963480   94724 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 00:18:08.062903   94724 system_pods.go:59] 9 kube-system pods found
	I0817 00:18:08.063147   94724 system_pods.go:61] "coredns-66bff467f8-2zqdn" [fa8ebea5-4c71-46fe-970e-76d34d9e2b2a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0817 00:18:08.063147   94724 system_pods.go:61] "coredns-66bff467f8-6d9nh" [7c6ba66b-a666-4d7d-af75-8ef87a624dd4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0817 00:18:08.063147   94724 system_pods.go:61] "etcd-stopped-upgrade-20210817001119-111344" [563656c4-e5a6-4375-bfdb-593b77d6318a] Running
	I0817 00:18:08.063147   94724 system_pods.go:61] "kindnet-ngdjs" [ccfaa503-d095-40cf-aa5c-acf1be35dce4] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0817 00:18:08.063147   94724 system_pods.go:61] "kube-apiserver-stopped-upgrade-20210817001119-111344" [157e6560-973c-43e0-b16f-66a7760fbb51] Running
	I0817 00:18:08.063147   94724 system_pods.go:61] "kube-controller-manager-stopped-upgrade-20210817001119-111344" [b1fc0d1a-112f-45a7-b948-a248c4d1cddc] Running
	I0817 00:18:08.063147   94724 system_pods.go:61] "kube-proxy-zbbt2" [902057b3-d4d6-4fe5-a221-f8f1d206d1f5] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0817 00:18:08.063147   94724 system_pods.go:61] "kube-scheduler-stopped-upgrade-20210817001119-111344" [c3f24f54-2e24-4e08-8ebf-6269aed25818] Running
	I0817 00:18:08.063147   94724 system_pods.go:61] "storage-provisioner" [603f71ef-a5a9-4806-9a61-0c66ccd1a1b3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0817 00:18:08.063147   94724 system_pods.go:74] duration metric: took 99.6629ms to wait for pod list to return data ...
	I0817 00:18:08.063147   94724 node_conditions.go:102] verifying NodePressure condition ...
	I0817 00:18:08.074054   94724 node_conditions.go:122] node storage ephemeral capacity is 65792556Ki
	I0817 00:18:08.074054   94724 node_conditions.go:123] node cpu capacity is 4
	I0817 00:18:08.074054   94724 node_conditions.go:105] duration metric: took 10.907ms to run NodePressure ...
	I0817 00:18:08.074270   94724 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 00:18:09.653603   94724 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.579273s)
	I0817 00:18:09.653835   94724 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0817 00:18:09.799394   94724 ops.go:34] apiserver oom_adj: -16
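
The legacy /proc/<pid>/oom_adj file is a scaled view of the modern oom_score_adj, and a strongly negative value such as the -16 read above tells the kernel's OOM killer to spare the apiserver under memory pressure. A sketch of the same read for an arbitrary pid, assuming a Linux kernel that still exposes the legacy file:

package main

import (
	"fmt"
	"os"
	"strings"
)

// readOOMAdj reproduces the `cat /proc/$(pgrep kube-apiserver)/oom_adj`
// check above for a given pid.
func readOOMAdj(pid int) (string, error) {
	b, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	v, err := readOOMAdj(os.Getpid()) // read our own score as a demo
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("oom_adj:", v)
}
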
	I0817 00:18:09.799394   94724 kubeadm.go:604] restartCluster took 48.7385824s
	I0817 00:18:09.799394   94724 kubeadm.go:392] StartCluster complete in 49.071009s
	I0817 00:18:09.799394   94724 settings.go:142] acquiring lock: {Name:mk81656fcf8bcddd49caaa1adb1c177165a02100 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 00:18:09.799921   94724 settings.go:150] Updating kubeconfig:  C:\Users\jenkins\minikube-integration\kubeconfig
	I0817 00:18:09.801881   94724 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\kubeconfig: {Name:mk312e0248780fd448f3a83862df8ee597f47373 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 00:18:09.828256   94724 kapi.go:59] client config for stopped-upgrade-20210817001119-111344: &rest.Config{Host:"https://127.0.0.1:55137", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins\\minikube-integration\\.minikube\\profiles\\stopped-upgrade-20210817001119-111344\\client.crt", KeyFile:"C:\\Users\\jenkins\\minikube-integration\\.minikube\\profiles\\stopped-upgrade-20210817001119-111344\\client.key", CAFile:"C:\\Users\\jenkins\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x14d7000), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0817 00:18:10.474862   94724 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "stopped-upgrade-20210817001119-111344" rescaled to 1
	I0817 00:18:10.475227   94724 start.go:226] Will wait 6m0s for node &{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}
	I0817 00:18:10.477550   94724 out.go:177] * Verifying Kubernetes components...
	I0817 00:18:10.475572   94724 addons.go:342] enableAddons start: toEnable=map[default-storageclass:true storage-provisioner:true], additional=[]
	I0817 00:18:10.475399   94724 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0817 00:18:10.476082   94724 config.go:177] Loaded profile config "stopped-upgrade-20210817001119-111344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0817 00:18:10.477942   94724 addons.go:59] Setting default-storageclass=true in profile "stopped-upgrade-20210817001119-111344"
	I0817 00:18:10.477942   94724 addons.go:59] Setting storage-provisioner=true in profile "stopped-upgrade-20210817001119-111344"
	I0817 00:18:10.478121   94724 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-20210817001119-111344"
	I0817 00:18:10.478308   94724 addons.go:135] Setting addon storage-provisioner=true in "stopped-upgrade-20210817001119-111344"
	W0817 00:18:10.478308   94724 addons.go:147] addon storage-provisioner should already be in state true
	I0817 00:18:10.478308   94724 host.go:66] Checking if "stopped-upgrade-20210817001119-111344" exists ...
	I0817 00:18:10.488567   94724 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 00:18:10.495566   94724 cli_runner.go:115] Run: docker container inspect stopped-upgrade-20210817001119-111344 --format={{.State.Status}}
	I0817 00:18:10.495566   94724 cli_runner.go:115] Run: docker container inspect stopped-upgrade-20210817001119-111344 --format={{.State.Status}}
	I0817 00:18:08.813826   57864 ssh_runner.go:189] Completed: sudo systemctl restart docker: (8.3362054s)
	I0817 00:18:08.823882   57864 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0817 00:18:09.039331   57864 docker.go:535] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.21.3
	k8s.gcr.io/kube-scheduler:v1.21.3
	k8s.gcr.io/kube-proxy:v1.21.3
	k8s.gcr.io/kube-controller-manager:v1.21.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.4.1
	kubernetesui/dashboard:v2.1.0
	k8s.gcr.io/coredns/coredns:v1.8.0
	k8s.gcr.io/etcd:3.4.13-0
	kubernetesui/metrics-scraper:v1.0.4
	
	-- /stdout --
	I0817 00:18:09.039331   57864 cache_images.go:74] Images are preloaded, skipping loading
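
cache_images.go can skip loading here because every expected tag appears in the `docker images --format {{.Repository}}:{{.Tag}}` output just listed; contrast 00:17:01 above, where the docker-flags profile reported k8s.gcr.io/kube-apiserver:v1.21.3 as not preloaded and fell back to copying the tarball. A sketch of that containment check, with illustrative inputs:

package main

import (
	"fmt"
	"strings"
)

// preloaded reports whether every expected image tag appears in the
// `docker images` output, which is the gist of the decision above.
func preloaded(dockerImages string, expected []string) bool {
	have := map[string]bool{}
	for _, line := range strings.Split(strings.TrimSpace(dockerImages), "\n") {
		have[strings.TrimSpace(line)] = true
	}
	for _, img := range expected {
		if !have[img] {
			return false
		}
	}
	return true
}

func main() {
	out := "k8s.gcr.io/kube-apiserver:v1.21.3\nk8s.gcr.io/pause:3.4.1\n"
	fmt.Println(preloaded(out, []string{"k8s.gcr.io/kube-apiserver:v1.21.3"})) // true
}
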
	I0817 00:18:09.046155   57864 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
	I0817 00:18:09.535494   57864 cni.go:93] Creating CNI manager for ""
	I0817 00:18:09.535494   57864 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0817 00:18:09.535740   57864 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0817 00:18:09.535740   57864 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-20210817001556-111344 NodeName:pause-20210817001556-111344 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0817 00:18:09.535944   57864 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "pause-20210817001556-111344"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
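The block above is the multi-document kubeadm config minikube renders for the pause-20210817001556-111344 profile; a few lines below it is copied to /var/tmp/minikube/kubeadm.yaml.new on the node. As a hypothetical Go sketch (not part of the test harness), such a file could be sanity-checked by decoding each YAML document and listing its kind:

	// Hypothetical sketch: decode a multi-document kubeadm config and print
	// the apiVersion/kind of each document it contains.
	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("kubeadm.yaml") // assumed local copy of the generated config
		if err != nil {
			panic(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				panic(err)
			}
			fmt.Println(doc.APIVersion, doc.Kind)
		}
	}

Against the config above this would print InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration in order.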
	
	I0817 00:18:09.536164   57864 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=pause-20210817001556-111344 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:pause-20210817001556-111344 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0817 00:18:09.545418   57864 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0817 00:18:09.582739   57864 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 00:18:09.592394   57864 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 00:18:09.637988   57864 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (353 bytes)
	I0817 00:18:09.719216   57864 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0817 00:18:09.791379   57864 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2070 bytes)
	I0817 00:18:09.877518   57864 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0817 00:18:09.911465   57864 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
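The bash one-liner above pins control-plane.minikube.internal to the node IP by filtering any stale record out of /etc/hosts and appending a fresh one. A hypothetical Go rendering of the same replace-then-append logic (illustrative only, not the harness's code):

	// Hypothetical sketch: drop any stale control-plane.minikube.internal
	// line from /etc/hosts, then append the fresh host record.
	package main

	import (
		"os"
		"strings"
	)

	func main() {
		const host = "control-plane.minikube.internal"
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+host) { // same filter as grep -v $'\t...$'
				kept = append(kept, line)
			}
		}
		kept = append(kept, "192.168.49.2\t"+host)
		if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			panic(err)
		}
	}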
	I0817 00:18:09.955452   57864 certs.go:52] Setting up C:\Users\jenkins\minikube-integration\.minikube\profiles\pause-20210817001556-111344 for IP: 192.168.49.2
	I0817 00:18:09.955981   57864 certs.go:179] skipping minikubeCA CA generation: C:\Users\jenkins\minikube-integration\.minikube\ca.key
	I0817 00:18:09.956103   57864 certs.go:179] skipping proxyClientCA CA generation: C:\Users\jenkins\minikube-integration\.minikube\proxy-client-ca.key
	I0817 00:18:09.956647   57864 certs.go:297] generating minikube-user signed cert: C:\Users\jenkins\minikube-integration\.minikube\profiles\pause-20210817001556-111344\client.key
	I0817 00:18:09.956647   57864 crypto.go:69] Generating cert C:\Users\jenkins\minikube-integration\.minikube\profiles\pause-20210817001556-111344\client.crt with IP's: []
	I0817 00:18:10.063608   57864 crypto.go:157] Writing cert to C:\Users\jenkins\minikube-integration\.minikube\profiles\pause-20210817001556-111344\client.crt ...
	I0817 00:18:10.063608   57864 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\.minikube\profiles\pause-20210817001556-111344\client.crt: {Name:mke4733b45c827287730e074dc1576582e44eaea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 00:18:10.065572   57864 crypto.go:165] Writing key to C:\Users\jenkins\minikube-integration\.minikube\profiles\pause-20210817001556-111344\client.key ...
	I0817 00:18:10.065572   57864 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\.minikube\profiles\pause-20210817001556-111344\client.key: {Name:mk7a5df891bd8abacec34ddb2b11879945dc086c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 00:18:10.066571   57864 certs.go:297] generating minikube signed cert: C:\Users\jenkins\minikube-integration\.minikube\profiles\pause-20210817001556-111344\apiserver.key.dd3b5fb2
	I0817 00:18:10.066571   57864 crypto.go:69] Generating cert C:\Users\jenkins\minikube-integration\.minikube\profiles\pause-20210817001556-111344\apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0817 00:18:10.234377   57864 crypto.go:157] Writing cert to C:\Users\jenkins\minikube-integration\.minikube\profiles\pause-20210817001556-111344\apiserver.crt.dd3b5fb2 ...
	I0817 00:18:10.234377   57864 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\.minikube\profiles\pause-20210817001556-111344\apiserver.crt.dd3b5fb2: {Name:mke2ac5ecd7f813d504116f6c00d3bfa6ffe4c6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 00:18:10.238518   57864 crypto.go:165] Writing key to C:\Users\jenkins\minikube-integration\.minikube\profiles\pause-20210817001556-111344\apiserver.key.dd3b5fb2 ...
	I0817 00:18:10.238518   57864 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\.minikube\profiles\pause-20210817001556-111344\apiserver.key.dd3b5fb2: {Name:mkadfc443717bce9b2d9b94290bfdc0009669b7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 00:18:10.239538   57864 certs.go:308] copying C:\Users\jenkins\minikube-integration\.minikube\profiles\pause-20210817001556-111344\apiserver.crt.dd3b5fb2 -> C:\Users\jenkins\minikube-integration\.minikube\profiles\pause-20210817001556-111344\apiserver.crt
	I0817 00:18:10.245493   57864 certs.go:312] copying C:\Users\jenkins\minikube-integration\.minikube\profiles\pause-20210817001556-111344\apiserver.key.dd3b5fb2 -> C:\Users\jenkins\minikube-integration\.minikube\profiles\pause-20210817001556-111344\apiserver.key
	I0817 00:18:10.247460   57864 certs.go:297] generating aggregator signed cert: C:\Users\jenkins\minikube-integration\.minikube\profiles\pause-20210817001556-111344\proxy-client.key
	I0817 00:18:10.247460   57864 crypto.go:69] Generating cert C:\Users\jenkins\minikube-integration\.minikube\profiles\pause-20210817001556-111344\proxy-client.crt with IP's: []
	I0817 00:18:10.466358   57864 crypto.go:157] Writing cert to C:\Users\jenkins\minikube-integration\.minikube\profiles\pause-20210817001556-111344\proxy-client.crt ...
	I0817 00:18:10.466358   57864 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\.minikube\profiles\pause-20210817001556-111344\proxy-client.crt: {Name:mk52c2671618ce7871fa46c5dc35fc4c9ee0af28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 00:18:10.466358   57864 crypto.go:165] Writing key to C:\Users\jenkins\minikube-integration\.minikube\profiles\pause-20210817001556-111344\proxy-client.key ...
	I0817 00:18:10.466358   57864 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\.minikube\profiles\pause-20210817001556-111344\proxy-client.key: {Name:mk47b9c19747c31579a734d0f00027f98297706c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
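The crypto.go lines above generate the profile's client, apiserver, and aggregator certificates, each signed by the cached minikube CAs. A hypothetical standard-library sketch of the same kind of issuance, using the apiserver SAN IPs from the log (illustrative only, not minikube's crypto.go; errors elided for brevity):

	// Hypothetical sketch: mint a CA, then issue a serving cert carrying the
	// SAN IPs 192.168.49.2, 10.96.0.1, 127.0.0.1 and 10.0.0.1 from the log.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Stand-in for the pre-existing .minikube/ca.{crt,key} pair.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Leaf cert with the apiserver SANs seen in the log line.
		leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		leafTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			IPAddresses: []net.IP{
				net.ParseIP("192.168.49.2"), net.ParseIP("10.96.0.1"),
				net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			},
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
	}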
	I0817 00:18:10.487565   57864 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\111344.pem (1338 bytes)
	W0817 00:18:10.488567   57864 certs.go:372] ignoring C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\111344_empty.pem, impossibly tiny 0 bytes
	I0817 00:18:10.488567   57864 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0817 00:18:10.488567   57864 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0817 00:18:10.489569   57864 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0817 00:18:10.489569   57864 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0817 00:18:10.489569   57864 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\1113442.pem (1708 bytes)
	I0817 00:18:10.492566   57864 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\pause-20210817001556-111344\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 00:18:10.583284   57864 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\pause-20210817001556-111344\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0817 00:18:10.685187   57864 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\pause-20210817001556-111344\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 00:18:10.782619   57864 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\pause-20210817001556-111344\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0817 00:18:10.864696   57864 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 00:18:10.969366   57864 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0817 00:18:11.099122   57864 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 00:18:11.117578   94724 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 00:18:11.117812   94724 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 00:18:11.117812   94724 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0817 00:18:11.121944   94724 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-20210817001119-111344
	I0817 00:18:11.144634   94724 kapi.go:59] client config for stopped-upgrade-20210817001119-111344: &rest.Config{Host:"https://127.0.0.1:55137", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins\\minikube-integration\\.minikube\\profiles\\stopped-upgrade-20210817001119-111344\\client.crt", KeyFile:"C:\\Users\\jenkins\\minikube-integration\\.minikube\\profiles\\stopped-upgrade-20210817001119-111344\\client.key", CAFile:"C:\\Users\\jenkins\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x14d7000), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0817 00:18:11.230710   94724 addons.go:135] Setting addon default-storageclass=true in "stopped-upgrade-20210817001119-111344"
	W0817 00:18:11.230710   94724 addons.go:147] addon default-storageclass should already be in state true
	I0817 00:18:11.230869   94724 host.go:66] Checking if "stopped-upgrade-20210817001119-111344" exists ...
	I0817 00:18:11.244929   94724 cli_runner.go:115] Run: docker container inspect stopped-upgrade-20210817001119-111344 --format={{.State.Status}}
	I0817 00:18:11.287955   94724 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.18.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
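The pipeline above rewrites the CoreDNS ConfigMap so that host.minikube.internal resolves to the host gateway: it inserts a hosts{} stanza immediately before the forward plugin and replaces the ConfigMap. A hypothetical Go sketch of the same string surgery the sed expression performs (illustrative only):

	// Hypothetical sketch: insert a hosts{} stanza just before the
	// "forward . /etc/resolv.conf" line of a Corefile.
	package main

	import (
		"fmt"
		"strings"
	)

	func injectHostRecord(corefile, hostIP string) string {
		stanza := "        hosts {\n" +
			"           " + hostIP + " host.minikube.internal\n" +
			"           fallthrough\n" +
			"        }\n"
		marker := "        forward . /etc/resolv.conf"
		i := strings.Index(corefile, marker)
		if i < 0 {
			return corefile // nothing to do if the forward plugin is absent
		}
		return corefile[:i] + stanza + corefile[i:]
	}

	func main() {
		corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}\n"
		fmt.Print(injectHostRecord(corefile, "192.168.65.2")) // host IP from the log
	}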
	I0817 00:18:11.305345   94724 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" stopped-upgrade-20210817001119-111344
	I0817 00:18:11.683097   94724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55139 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\stopped-upgrade-20210817001119-111344\id_rsa Username:docker}
	I0817 00:18:11.784133   94724 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0817 00:18:11.784133   94724 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0817 00:18:11.799069   94724 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-20210817001119-111344
	I0817 00:18:11.846442   94724 kubeadm.go:484] skip waiting for components based on config.
	I0817 00:18:11.846442   94724 node_conditions.go:102] verifying NodePressure condition ...
	I0817 00:18:11.867430   94724 node_conditions.go:122] node storage ephemeral capacity is 65792556Ki
	I0817 00:18:11.867430   94724 node_conditions.go:123] node cpu capacity is 4
	I0817 00:18:11.867645   94724 node_conditions.go:105] duration metric: took 21.0599ms to run NodePressure ...
	I0817 00:18:11.867645   94724 start.go:231] waiting for startup goroutines ...
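The node_conditions.go lines above verify NodePressure by reading each node's capacity (here 4 CPUs and 65792556Ki of ephemeral storage). A hypothetical client-go sketch (not the harness's code) that surfaces the same fields:

	// Hypothetical sketch: list nodes and print the capacity fields checked
	// during the NodePressure verification above.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed kubeconfig path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			fmt.Println(n.Name,
				"cpu:", n.Status.Capacity.Cpu().String(),
				"ephemeral-storage:", n.Status.Capacity.StorageEphemeral().String())
		}
	}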
	I0817 00:18:12.350135   94724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55139 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\stopped-upgrade-20210817001119-111344\id_rsa Username:docker}
	I0817 00:18:12.428511   94724 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 00:18:08.836256   76328 ssh_runner.go:189] Completed: sudo systemctl restart docker: (3.2716529s)
	I0817 00:18:08.842083   76328 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
	I0817 00:18:09.398791   76328 cni.go:93] Creating CNI manager for ""
	I0817 00:18:09.398791   76328 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0817 00:18:09.398791   76328 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0817 00:18:09.399092   76328 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:docker-flags-20210817001618-111344 NodeName:docker-flags-20210817001618-111344 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0817 00:18:09.399450   76328 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "docker-flags-20210817001618-111344"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0817 00:18:09.399910   76328 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=docker-flags-20210817001618-111344 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:docker-flags-20210817001618-111344 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0817 00:18:09.408702   76328 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0817 00:18:09.442486   76328 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 00:18:09.459787   76328 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 00:18:09.495155   76328 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (360 bytes)
	I0817 00:18:09.577021   76328 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0817 00:18:09.634867   76328 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2077 bytes)
	I0817 00:18:09.729777   76328 ssh_runner.go:149] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0817 00:18:09.748530   76328 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 00:18:09.835620   76328 certs.go:52] Setting up C:\Users\jenkins\minikube-integration\.minikube\profiles\docker-flags-20210817001618-111344 for IP: 192.168.58.2
	I0817 00:18:09.836206   76328 certs.go:179] skipping minikubeCA CA generation: C:\Users\jenkins\minikube-integration\.minikube\ca.key
	I0817 00:18:09.836511   76328 certs.go:179] skipping proxyClientCA CA generation: C:\Users\jenkins\minikube-integration\.minikube\proxy-client-ca.key
	I0817 00:18:09.836779   76328 certs.go:297] generating minikube-user signed cert: C:\Users\jenkins\minikube-integration\.minikube\profiles\docker-flags-20210817001618-111344\client.key
	I0817 00:18:09.837188   76328 crypto.go:69] Generating cert C:\Users\jenkins\minikube-integration\.minikube\profiles\docker-flags-20210817001618-111344\client.crt with IP's: []
	I0817 00:18:10.090897   76328 crypto.go:157] Writing cert to C:\Users\jenkins\minikube-integration\.minikube\profiles\docker-flags-20210817001618-111344\client.crt ...
	I0817 00:18:10.090897   76328 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\.minikube\profiles\docker-flags-20210817001618-111344\client.crt: {Name:mk57bc2e13376b0a281d14b1feb14094994e15ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 00:18:10.092579   76328 crypto.go:165] Writing key to C:\Users\jenkins\minikube-integration\.minikube\profiles\docker-flags-20210817001618-111344\client.key ...
	I0817 00:18:10.092579   76328 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\.minikube\profiles\docker-flags-20210817001618-111344\client.key: {Name:mkc4b2138d71265921abd903da04be62ec7b5276 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 00:18:10.094075   76328 certs.go:297] generating minikube signed cert: C:\Users\jenkins\minikube-integration\.minikube\profiles\docker-flags-20210817001618-111344\apiserver.key.cee25041
	I0817 00:18:10.094075   76328 crypto.go:69] Generating cert C:\Users\jenkins\minikube-integration\.minikube\profiles\docker-flags-20210817001618-111344\apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0817 00:18:10.462360   76328 crypto.go:157] Writing cert to C:\Users\jenkins\minikube-integration\.minikube\profiles\docker-flags-20210817001618-111344\apiserver.crt.cee25041 ...
	I0817 00:18:10.462360   76328 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\.minikube\profiles\docker-flags-20210817001618-111344\apiserver.crt.cee25041: {Name:mk1d0e88ea3d9d4737996b8f77a9396c0b5c8004 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 00:18:10.464346   76328 crypto.go:165] Writing key to C:\Users\jenkins\minikube-integration\.minikube\profiles\docker-flags-20210817001618-111344\apiserver.key.cee25041 ...
	I0817 00:18:10.464346   76328 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\.minikube\profiles\docker-flags-20210817001618-111344\apiserver.key.cee25041: {Name:mk7c3feaa2156fde79776e7f0ed37dad26455aa8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 00:18:10.465362   76328 certs.go:308] copying C:\Users\jenkins\minikube-integration\.minikube\profiles\docker-flags-20210817001618-111344\apiserver.crt.cee25041 -> C:\Users\jenkins\minikube-integration\.minikube\profiles\docker-flags-20210817001618-111344\apiserver.crt
	I0817 00:18:10.466358   76328 certs.go:312] copying C:\Users\jenkins\minikube-integration\.minikube\profiles\docker-flags-20210817001618-111344\apiserver.key.cee25041 -> C:\Users\jenkins\minikube-integration\.minikube\profiles\docker-flags-20210817001618-111344\apiserver.key
	I0817 00:18:10.486568   76328 certs.go:297] generating aggregator signed cert: C:\Users\jenkins\minikube-integration\.minikube\profiles\docker-flags-20210817001618-111344\proxy-client.key
	I0817 00:18:10.486568   76328 crypto.go:69] Generating cert C:\Users\jenkins\minikube-integration\.minikube\profiles\docker-flags-20210817001618-111344\proxy-client.crt with IP's: []
	I0817 00:18:11.053348   76328 crypto.go:157] Writing cert to C:\Users\jenkins\minikube-integration\.minikube\profiles\docker-flags-20210817001618-111344\proxy-client.crt ...
	I0817 00:18:11.053348   76328 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\.minikube\profiles\docker-flags-20210817001618-111344\proxy-client.crt: {Name:mk7993bdcd28f0d9f2afe4dcb0b1b8358df5243d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 00:18:11.054339   76328 crypto.go:165] Writing key to C:\Users\jenkins\minikube-integration\.minikube\profiles\docker-flags-20210817001618-111344\proxy-client.key ...
	I0817 00:18:11.054339   76328 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\.minikube\profiles\docker-flags-20210817001618-111344\proxy-client.key: {Name:mka45b4594fd46de059673e248a88a1c03058cbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 00:18:11.054339   76328 vm_assets.go:99] NewFileAsset: C:\Users\jenkins\minikube-integration\.minikube\profiles\docker-flags-20210817001618-111344\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0817 00:18:11.054339   76328 vm_assets.go:99] NewFileAsset: C:\Users\jenkins\minikube-integration\.minikube\profiles\docker-flags-20210817001618-111344\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0817 00:18:11.054339   76328 vm_assets.go:99] NewFileAsset: C:\Users\jenkins\minikube-integration\.minikube\profiles\docker-flags-20210817001618-111344\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0817 00:18:11.066907   76328 vm_assets.go:99] NewFileAsset: C:\Users\jenkins\minikube-integration\.minikube\profiles\docker-flags-20210817001618-111344\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0817 00:18:11.067302   76328 vm_assets.go:99] NewFileAsset: C:\Users\jenkins\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0817 00:18:11.067777   76328 vm_assets.go:99] NewFileAsset: C:\Users\jenkins\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0817 00:18:11.068007   76328 vm_assets.go:99] NewFileAsset: C:\Users\jenkins\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0817 00:18:11.068247   76328 vm_assets.go:99] NewFileAsset: C:\Users\jenkins\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0817 00:18:11.068820   76328 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\111344.pem (1338 bytes)
	W0817 00:18:11.069409   76328 certs.go:372] ignoring C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\111344_empty.pem, impossibly tiny 0 bytes
	I0817 00:18:11.069492   76328 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0817 00:18:11.069492   76328 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0817 00:18:11.069492   76328 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0817 00:18:11.069492   76328 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0817 00:18:11.070404   76328 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\1113442.pem (1708 bytes)
	I0817 00:18:11.070404   76328 vm_assets.go:99] NewFileAsset: C:\Users\jenkins\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0817 00:18:11.070404   76328 vm_assets.go:99] NewFileAsset: C:\Users\jenkins\minikube-integration\.minikube\certs\111344.pem -> /usr/share/ca-certificates/111344.pem
	I0817 00:18:11.070404   76328 vm_assets.go:99] NewFileAsset: C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\1113442.pem -> /usr/share/ca-certificates/1113442.pem
	I0817 00:18:11.073254   76328 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\docker-flags-20210817001618-111344\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 00:18:11.199822   76328 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\docker-flags-20210817001618-111344\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0817 00:18:11.335529   76328 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\docker-flags-20210817001618-111344\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 00:18:11.446146   76328 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\docker-flags-20210817001618-111344\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0817 00:18:11.576372   76328 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 00:18:11.701344   76328 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0817 00:18:11.808229   76328 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 00:18:11.933432   76328 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0817 00:18:12.043483   76328 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 00:18:12.163423   76328 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\certs\111344.pem --> /usr/share/ca-certificates/111344.pem (1338 bytes)
	I0817 00:18:12.287191   76328 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\1113442.pem --> /usr/share/ca-certificates/1113442.pem (1708 bytes)
	I0817 00:18:12.369284   76328 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0817 00:18:12.469804   76328 ssh_runner.go:149] Run: openssl version
	I0817 00:18:12.510443   76328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111344.pem && ln -fs /usr/share/ca-certificates/111344.pem /etc/ssl/certs/111344.pem"
	I0817 00:18:12.560468   76328 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/111344.pem
	I0817 00:18:12.577364   76328 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 16 23:23 /usr/share/ca-certificates/111344.pem
	I0817 00:18:12.583537   76328 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111344.pem
	I0817 00:18:12.629210   76328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/111344.pem /etc/ssl/certs/51391683.0"
	I0817 00:18:12.673226   76328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1113442.pem && ln -fs /usr/share/ca-certificates/1113442.pem /etc/ssl/certs/1113442.pem"
	I0817 00:18:12.734579   76328 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1113442.pem
	I0817 00:18:12.751506   76328 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 16 23:23 /usr/share/ca-certificates/1113442.pem
	I0817 00:18:12.760071   76328 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1113442.pem
	I0817 00:18:12.802882   76328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1113442.pem /etc/ssl/certs/3ec20f2e.0"
	I0817 00:18:12.867193   76328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 00:18:12.930182   76328 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 00:18:12.952425   76328 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 16 23:12 /usr/share/ca-certificates/minikubeCA.pem
	I0817 00:18:12.963408   76328 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 00:18:12.993777   76328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
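Each run above computes the OpenSSL subject hash of a CA file and links it as /etc/ssl/certs/<hash>.0 (e.g. b5213941.0 for minikubeCA.pem), which is how the system trust store locates certificates. A hypothetical Go wrapper around the same two steps (illustrative only):

	// Hypothetical sketch: compute the OpenSSL subject hash of a PEM file and
	// symlink it into /etc/ssl/certs as <hash>.0, mirroring "ln -fs".
	package main

	import (
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func main() {
		pemPath := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941", as in the log
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // -f: replace any existing link
		if err := os.Symlink(pemPath, link); err != nil {
			panic(err)
		}
	}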
	I0817 00:18:13.044859   76328 kubeadm.go:390] StartCluster: {Name:docker-flags-20210817001618-111344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:docker-flags-20210817001618-111344 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 00:18:13.051350   76328 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0817 00:18:13.197647   76328 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 00:18:13.258278   76328 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 00:18:13.290738   76328 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0817 00:18:13.307348   76328 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 00:18:12.914044   94724 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0817 00:18:13.773441   94724 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.18.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.4853088s)
	I0817 00:18:13.773441   94724 start.go:728] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0817 00:18:14.208763   94724 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.7801846s)
	I0817 00:18:14.208763   94724 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.2946701s)
	I0817 00:18:14.212723   94724 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0817 00:18:14.212879   94724 addons.go:344] enableAddons completed in 3.7372606s
	I0817 00:18:14.412872   94724 start.go:462] kubectl: 1.20.0, cluster: 1.18.0 (minor skew: 2)
	I0817 00:18:14.414571   94724 out.go:177] 
	W0817 00:18:14.415093   94724 out.go:242] ! C:\Program Files\Docker\Docker\resources\bin\kubectl.exe is version 1.20.0, which may have incompatibilities with Kubernetes 1.18.0.
	I0817 00:18:14.416815   94724 out.go:177]   - Want kubectl v1.18.0? Try 'minikube kubectl -- get pods -A'
	I0817 00:18:14.418750   94724 out.go:177] * Done! kubectl is now configured to use "stopped-upgrade-20210817001119-111344" cluster and "" namespace by default
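The warning above comes from a minor-version comparison between the kubectl client (1.20.0) and the cluster (1.18.0); kubectl is only supported within one minor version of the apiserver, hence the "minor skew: 2" note. A hypothetical sketch of that skew check (not minikube's start.go):

	// Hypothetical sketch: parse two "major.minor.patch" strings and report
	// the absolute minor-version skew between client and cluster.
	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	func minor(v string) int {
		m, _ := strconv.Atoi(strings.Split(v, ".")[1]) // parse error ignored in this sketch
		return m
	}

	func main() {
		client, cluster := "1.20.0", "1.18.0" // versions from the log
		skew := minor(client) - minor(cluster)
		if skew < 0 {
			skew = -skew
		}
		fmt.Printf("minor skew: %d\n", skew) // kubectl supports +/-1 minor of the server
	}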
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2021-08-17 00:16:31 UTC, end at Tue 2021-08-17 00:18:23 UTC. --
	Aug 17 00:16:31 stopped-upgrade-20210817001119-111344 systemd[1]: Starting Docker Application Container Engine...
	Aug 17 00:16:31 stopped-upgrade-20210817001119-111344 dockerd[85]: time="2021-08-17T00:16:31.859603300Z" level=info msg="Starting up"
	Aug 17 00:16:31 stopped-upgrade-20210817001119-111344 dockerd[85]: time="2021-08-17T00:16:31.870825600Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Aug 17 00:16:31 stopped-upgrade-20210817001119-111344 dockerd[85]: time="2021-08-17T00:16:31.871111200Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Aug 17 00:16:31 stopped-upgrade-20210817001119-111344 dockerd[85]: time="2021-08-17T00:16:31.871232900Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] }" module=grpc
	Aug 17 00:16:31 stopped-upgrade-20210817001119-111344 dockerd[85]: time="2021-08-17T00:16:31.871550900Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Aug 17 00:16:31 stopped-upgrade-20210817001119-111344 dockerd[85]: time="2021-08-17T00:16:31.871896500Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0006d9c00, CONNECTING" module=grpc
	Aug 17 00:16:31 stopped-upgrade-20210817001119-111344 dockerd[85]: time="2021-08-17T00:16:31.874170200Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0006d9c00, READY" module=grpc
	Aug 17 00:16:31 stopped-upgrade-20210817001119-111344 dockerd[85]: time="2021-08-17T00:16:31.878069000Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Aug 17 00:16:31 stopped-upgrade-20210817001119-111344 dockerd[85]: time="2021-08-17T00:16:31.878140900Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Aug 17 00:16:31 stopped-upgrade-20210817001119-111344 dockerd[85]: time="2021-08-17T00:16:31.878172300Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] }" module=grpc
	Aug 17 00:16:31 stopped-upgrade-20210817001119-111344 dockerd[85]: time="2021-08-17T00:16:31.878195900Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Aug 17 00:16:31 stopped-upgrade-20210817001119-111344 dockerd[85]: time="2021-08-17T00:16:31.878275000Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc000792cc0, CONNECTING" module=grpc
	Aug 17 00:16:31 stopped-upgrade-20210817001119-111344 dockerd[85]: time="2021-08-17T00:16:31.885878900Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc000792cc0, READY" module=grpc
	Aug 17 00:16:31 stopped-upgrade-20210817001119-111344 dockerd[85]: time="2021-08-17T00:16:31.931879200Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Aug 17 00:16:32 stopped-upgrade-20210817001119-111344 dockerd[85]: time="2021-08-17T00:16:32.091035500Z" level=info msg="Loading containers: start."
	Aug 17 00:16:32 stopped-upgrade-20210817001119-111344 dockerd[85]: time="2021-08-17T00:16:32.854153100Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 17 00:16:33 stopped-upgrade-20210817001119-111344 dockerd[85]: time="2021-08-17T00:16:33.143508900Z" level=info msg="Loading containers: done."
	Aug 17 00:16:33 stopped-upgrade-20210817001119-111344 dockerd[85]: time="2021-08-17T00:16:33.269855400Z" level=info msg="Docker daemon" commit=6a30dfca03 graphdriver(s)=overlay2 version=19.03.2
	Aug 17 00:16:33 stopped-upgrade-20210817001119-111344 dockerd[85]: time="2021-08-17T00:16:33.270387000Z" level=info msg="Daemon has completed initialization"
	Aug 17 00:16:33 stopped-upgrade-20210817001119-111344 systemd[1]: Started Docker Application Container Engine.
	Aug 17 00:16:33 stopped-upgrade-20210817001119-111344 dockerd[85]: time="2021-08-17T00:16:33.386779400Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 17 00:16:33 stopped-upgrade-20210817001119-111344 dockerd[85]: time="2021-08-17T00:16:33.386975900Z" level=info msg="API listen on [::]:2376"
	Aug 17 00:16:37 stopped-upgrade-20210817001119-111344 systemd[1]: Stopping Docker Application Container Engine...
	Aug 17 00:16:37 stopped-upgrade-20210817001119-111344 dockerd[85]: time="2021-08-17T00:16:37.853963400Z" level=info msg="Processing signal 'terminated'"
	Aug 17 00:16:37 stopped-upgrade-20210817001119-111344 dockerd[85]: time="2021-08-17T00:16:37.857835400Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 17 00:16:37 stopped-upgrade-20210817001119-111344 dockerd[85]: time="2021-08-17T00:16:37.859758300Z" level=info msg="Daemon shutdown complete"
	Aug 17 00:16:37 stopped-upgrade-20210817001119-111344 systemd[1]: docker.service: Succeeded.
	Aug 17 00:16:37 stopped-upgrade-20210817001119-111344 systemd[1]: Stopped Docker Application Container Engine.
	Aug 17 00:16:37 stopped-upgrade-20210817001119-111344 systemd[1]: Starting Docker Application Container Engine...
	Aug 17 00:16:38 stopped-upgrade-20210817001119-111344 dockerd[364]: time="2021-08-17T00:16:38.123556100Z" level=info msg="Starting up"
	Aug 17 00:16:38 stopped-upgrade-20210817001119-111344 dockerd[364]: time="2021-08-17T00:16:38.128656600Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Aug 17 00:16:38 stopped-upgrade-20210817001119-111344 dockerd[364]: time="2021-08-17T00:16:38.128725800Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Aug 17 00:16:38 stopped-upgrade-20210817001119-111344 dockerd[364]: time="2021-08-17T00:16:38.128761100Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] }" module=grpc
	Aug 17 00:16:38 stopped-upgrade-20210817001119-111344 dockerd[364]: time="2021-08-17T00:16:38.128781900Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Aug 17 00:16:38 stopped-upgrade-20210817001119-111344 dockerd[364]: time="2021-08-17T00:16:38.128895200Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc000608850, CONNECTING" module=grpc
	Aug 17 00:16:38 stopped-upgrade-20210817001119-111344 dockerd[364]: time="2021-08-17T00:16:38.139834400Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc000608850, READY" module=grpc
	Aug 17 00:16:38 stopped-upgrade-20210817001119-111344 dockerd[364]: time="2021-08-17T00:16:38.143788100Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Aug 17 00:16:38 stopped-upgrade-20210817001119-111344 dockerd[364]: time="2021-08-17T00:16:38.143855600Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Aug 17 00:16:38 stopped-upgrade-20210817001119-111344 dockerd[364]: time="2021-08-17T00:16:38.143894000Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] }" module=grpc
	Aug 17 00:16:38 stopped-upgrade-20210817001119-111344 dockerd[364]: time="2021-08-17T00:16:38.143911400Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Aug 17 00:16:38 stopped-upgrade-20210817001119-111344 dockerd[364]: time="2021-08-17T00:16:38.144000500Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0006e68c0, CONNECTING" module=grpc
	Aug 17 00:16:38 stopped-upgrade-20210817001119-111344 dockerd[364]: time="2021-08-17T00:16:38.150712700Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0006e68c0, READY" module=grpc
	Aug 17 00:16:38 stopped-upgrade-20210817001119-111344 dockerd[364]: time="2021-08-17T00:16:38.178494100Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Aug 17 00:16:39 stopped-upgrade-20210817001119-111344 dockerd[364]: time="2021-08-17T00:16:39.123239100Z" level=info msg="Loading containers: start."
	Aug 17 00:16:40 stopped-upgrade-20210817001119-111344 dockerd[364]: time="2021-08-17T00:16:40.009840200Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 17 00:16:40 stopped-upgrade-20210817001119-111344 dockerd[364]: time="2021-08-17T00:16:40.670728400Z" level=info msg="Loading containers: done."
	Aug 17 00:16:40 stopped-upgrade-20210817001119-111344 dockerd[364]: time="2021-08-17T00:16:40.798265200Z" level=info msg="Docker daemon" commit=6a30dfca03 graphdriver(s)=overlay2 version=19.03.2
	Aug 17 00:16:40 stopped-upgrade-20210817001119-111344 dockerd[364]: time="2021-08-17T00:16:40.798398300Z" level=info msg="Daemon has completed initialization"
	Aug 17 00:16:41 stopped-upgrade-20210817001119-111344 systemd[1]: Started Docker Application Container Engine.
	Aug 17 00:16:41 stopped-upgrade-20210817001119-111344 dockerd[364]: time="2021-08-17T00:16:41.034189300Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 17 00:16:41 stopped-upgrade-20210817001119-111344 dockerd[364]: time="2021-08-17T00:16:41.034935600Z" level=info msg="API listen on [::]:2376"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	5eb9884981d78       a31f78c7c8ce1       48 seconds ago      Running             kube-scheduler            0                   4a8207dc7e3fc
	d39476c0f9e50       303ce5db0e90d       48 seconds ago      Running             etcd                      0                   777dfc5bcfeb8
	7d449783b28fc       d3e55153f52fb       49 seconds ago      Running             kube-controller-manager   0                   7766c7a3ba193
	370c67f2bdcdc       74060cea7f704       49 seconds ago      Running             kube-apiserver            1                   fb1a553708312
	198008788b875       67da37a9a360e       2 minutes ago       Created             coredns                   0                   71b387ab00d00
	2af10de3bff09       4689081edb103       2 minutes ago       Exited              storage-provisioner       0                   ee3aa5f5db60f
	90720ba7f59fa       43940c34f24f3       2 minutes ago       Exited              kube-proxy                0                   30fb85ddd6169
	4d3dcc17996d2       aa67fec7d7ef7       2 minutes ago       Exited              kindnet-cni               0                   b358d77f37d22
	b1c3fba68b4c7       a31f78c7c8ce1       3 minutes ago       Exited              kube-scheduler            0                   150878ef645b7
	7a43cdf5f68b9       74060cea7f704       3 minutes ago       Exited              kube-apiserver            0                   88960f7d5222f
	1cd4a638b1b8a       d3e55153f52fb       3 minutes ago       Exited              kube-controller-manager   0                   5476fdc77b956
	0d5269c4aa848       303ce5db0e90d       3 minutes ago       Exited              etcd                      0                   6b07cb79d6e42
	
	* 
	* ==> coredns [198008788b87] <==
	* 
	* 
	* ==> describe nodes <==
	* Name:               stopped-upgrade-20210817001119-111344
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=stopped-upgrade-20210817001119-111344
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=48fefd43444d2f8852f527c78f0141b377b1e42a
	                    minikube.k8s.io/name=stopped-upgrade-20210817001119-111344
	                    minikube.k8s.io/updated_at=2021_08_17T00_15_45_0700
	                    minikube.k8s.io/version=v1.9.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Aug 2021 00:15:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  stopped-upgrade-20210817001119-111344
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Aug 2021 00:18:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Aug 2021 00:18:10 +0000   Tue, 17 Aug 2021 00:15:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Aug 2021 00:18:10 +0000   Tue, 17 Aug 2021 00:15:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Aug 2021 00:18:10 +0000   Tue, 17 Aug 2021 00:15:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Aug 2021 00:18:10 +0000   Tue, 17 Aug 2021 00:18:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.0.2
	  Hostname:    stopped-upgrade-20210817001119-111344
	Capacity:
	  cpu:                4
	  ephemeral-storage:  65792556Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             20481980Ki
	  pods:               110
	Allocatable:
	  cpu:                4
	  ephemeral-storage:  65792556Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             20481980Ki
	  pods:               110
	System Info:
	  Machine ID:                 38136d49d88f4c39a0833e4861abb136
	  System UUID:                5a3fa648-7607-483d-b613-b6cf059702db
	  Boot ID:                    59d49a8b-044c-440e-a1d3-94e728b56235
	  Kernel Version:             4.19.121-linuxkit
	  OS Image:                   Ubuntu 19.10
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://19.3.2
	  Kubelet Version:            v1.18.0
	  Kube-Proxy Version:         v1.18.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                             CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                             ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bff467f8-2zqdn                                         100m (2%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m29s
	  kube-system                 coredns-66bff467f8-6d9nh                                         100m (2%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m30s
	  kube-system                 etcd-stopped-upgrade-20210817001119-111344                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  kube-system                 kindnet-ngdjs                                                    100m (2%)     100m (2%)   50Mi (0%)        50Mi (0%)      2m29s
	  kube-system                 kube-apiserver-stopped-upgrade-20210817001119-111344             250m (6%)     0 (0%)      0 (0%)           0 (0%)         2m40s
	  kube-system                 kube-controller-manager-stopped-upgrade-20210817001119-111344    200m (5%)     0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kube-proxy-zbbt2                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 kube-scheduler-stopped-upgrade-20210817001119-111344             100m (2%)     0 (0%)      0 (0%)           0 (0%)         6s
	  kube-system                 storage-provisioner                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (21%)  100m (2%)
	  memory             190Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From                                               Message
	  ----    ------                   ----               ----                                               -------
	  Normal  Starting                 2m42s              kubelet, stopped-upgrade-20210817001119-111344     Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m42s              kubelet, stopped-upgrade-20210817001119-111344     Node stopped-upgrade-20210817001119-111344 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m42s              kubelet, stopped-upgrade-20210817001119-111344     Node stopped-upgrade-20210817001119-111344 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m42s              kubelet, stopped-upgrade-20210817001119-111344     Node stopped-upgrade-20210817001119-111344 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m41s              kubelet, stopped-upgrade-20210817001119-111344     Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m31s              kubelet, stopped-upgrade-20210817001119-111344     Node stopped-upgrade-20210817001119-111344 status is now: NodeReady
	  Normal  Starting                 2m23s              kube-proxy, stopped-upgrade-20210817001119-111344  Starting kube-proxy.
	  Normal  Starting                 57s                kubelet, stopped-upgrade-20210817001119-111344     Starting kubelet.
	  Normal  NodeAllocatableEnforced  56s                kubelet, stopped-upgrade-20210817001119-111344     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  55s (x8 over 56s)  kubelet, stopped-upgrade-20210817001119-111344     Node stopped-upgrade-20210817001119-111344 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    55s (x8 over 56s)  kubelet, stopped-upgrade-20210817001119-111344     Node stopped-upgrade-20210817001119-111344 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     55s (x7 over 56s)  kubelet, stopped-upgrade-20210817001119-111344     Node stopped-upgrade-20210817001119-111344 status is now: NodeHasSufficientPID
	
	* 
	* ==> dmesg <==
	* [ +17.293522] PCI: System does not support PCI
	[  +0.312283] Hangcheck: starting hangcheck timer 0.9.1 (tick is 180 seconds, margin is 60 seconds).
	[  +0.348937] Unstable clock detected, switching default tracing clock to "global"
	              If you want to keep using the local clock, then add:
	                "trace_clock=local"
	              on the kernel command line
	[  +0.028097] FAT-fs (sr0): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
	[  +0.006857] FAT-fs (sr0): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
	[ +11.342368] grpcfuse: loading out-of-tree module taints kernel.
	[Aug16 23:12] hrtimer: interrupt took 3543100 ns
	[Aug16 23:13] ------------[ cut here ]------------
	[  +0.000002] rq->tmp_alone_branch != &rq->leaf_cfs_rq_list
	[  +0.000110] WARNING: CPU: 1 PID: 0 at kernel/sched/fair.c:375 assert_list_leaf_cfs_rq+0x2c/0x2f
	[  +0.000001] Modules linked in: xfrm_user xfrm_algo bpfilter grpcfuse(O) hv_sock vsock
	[  +0.000143] CPU: 1 PID: 0 Comm: swapper/1 Tainted: G           O    T 4.19.121-linuxkit #1
	[  +0.000002] Hardware name: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.0 12/17/2019
	[  +0.000004] RIP: 0010:assert_list_leaf_cfs_rq+0x2c/0x2f
	[  +0.000002] Code: 87 68 09 00 00 48 3b 87 78 09 00 00 74 1e 80 3d 8e 85 23 01 00 75 15 48 c7 c7 7f 4b fe 91 c6 05 7e 85 23 01 01 e8 a6 68 fd ff <0f> 0b c3 0f 1f 44 00 00 83 be dc 04 00 00 01 75 1d 48 3b b7 88 09
	[  +0.000002] RSP: 0018:ffff8d1631c43e80 EFLAGS: 00010082
	[  +0.000084] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000002] RDX: 0000000000000000 RSI: ffffffff9282910d RDI: 0000000000000046
	[  +0.000001] RBP: ffff8d1631ca1880 R08: 0000007cfc122c99 R09: 000000000000002d
	[  +0.000001] R10: 0000000000000046 R11: 0000000000000000 R12: ffff8d161be38000
	[  +0.000001] R13: ffff8d161be38000 R14: ffff8d161be38180 R15: 0000000000000002
	[  +0.000008] FS:  0000000000000000(0000) GS:ffff8d1631c40000(0000) knlGS:0000000000000000
	[  +0.000001] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
	[  +0.000001] CR2: 000000c00148a078 CR3: 000000042020c002 CR4: 00000000001606a0
	[  +0.000024] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
	[  +0.000001] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
	[  +0.000001] Call Trace:
	[  +0.000034]  <IRQ>
	[  +0.000005]  enqueue_task_fair+0xfa/0x125
	[  +0.000026]  ttwu_do_activate+0x44/0x7f
	[  +0.000010]  try_to_wake_up+0x25f/0x2a7
	[  +0.000032]  ? hrtimer_init+0xde/0xde
	[  +0.000002]  hrtimer_wakeup+0x1e/0x21
	[  +0.000022]  __hrtimer_run_queues+0x117/0x1c4
	[  +0.000010]  ? ktime_get_update_offsets_now+0x36/0x95
	[  +0.000003]  hrtimer_interrupt+0x92/0x165
	[  +0.000044]  hv_stimer0_isr+0x20/0x2d
	[  +0.000053]  hv_stimer0_vector_handler+0x3b/0x57
	[  +0.000021]  hv_stimer0_callback_vector+0xf/0x20
	[  +0.000002]  </IRQ>
	[  +0.000002] RIP: 0010:native_safe_halt+0x7/0x8
	[  +0.000002] Code: 60 02 df f0 83 44 24 fc 00 48 8b 00 a8 08 74 0b 65 81 25 dd ce 6f 6e ff ff ff 7f c3 e8 ce e6 72 ff f4 c3 e8 c7 e6 72 ff fb f4 <c3> 0f 1f 44 00 00 53 e8 69 0e 82 ff 65 8b 35 83 64 6f 6e 31 ff e8
	[  +0.000001] RSP: 0018:ffffb51d800a3ec8 EFLAGS: 00000246 ORIG_RAX: ffffffffffffff12
	[  +0.000002] RAX: ffffffff91918b30 RBX: 0000000000000001 RCX: ffffffff92253150
	[  +0.000001] RDX: 0000000000171622 RSI: 0000000000000001 RDI: 0000000000000001
	[  +0.000001] RBP: 0000000000000000 R08: 0000007cfc1104b2 R09: 0000000000000002
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: ffff8d162e19ef80 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000002]  ? __sched_text_end+0x1/0x1
	[  +0.000021]  ? native_safe_halt+0x5/0x8
	[  +0.000002]  default_idle+0x1b/0x2c
	[  +0.000003]  do_idle+0xe5/0x216
	[  +0.000003]  cpu_startup_entry+0x6f/0x71
	[  +0.000019]  start_secondary+0x18e/0x1a9
	[  +0.000032]  secondary_startup_64+0xa4/0xb0
	[  +0.000020] ---[ end trace b7d34331c4afdfb9 ]---
	[Aug17 00:14] tee (131347): /proc/127190/oom_adj is deprecated, please use /proc/127190/oom_score_adj instead.
	
	* 
	* ==> etcd [0d5269c4aa84] <==
	* [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
	2021-08-17 00:15:24.558273 I | etcdmain: etcd Version: 3.4.3
	2021-08-17 00:15:24.558362 I | etcdmain: Git SHA: 3cf2f69b5
	2021-08-17 00:15:24.558367 I | etcdmain: Go Version: go1.12.12
	2021-08-17 00:15:24.558371 I | etcdmain: Go OS/Arch: linux/amd64
	2021-08-17 00:15:24.558377 I | etcdmain: setting maximum number of CPUs to 4, total number of available CPUs is 4
	[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
	2021-08-17 00:15:24.558559 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-17 00:15:24.769502 I | embed: name = stopped-upgrade-20210817001119-111344
	2021-08-17 00:15:24.769533 I | embed: data dir = /var/lib/minikube/etcd
	2021-08-17 00:15:24.775622 I | embed: member dir = /var/lib/minikube/etcd/member
	2021-08-17 00:15:24.775634 I | embed: heartbeat = 100ms
	2021-08-17 00:15:24.775638 I | embed: election = 1000ms
	2021-08-17 00:15:24.775643 I | embed: snapshot count = 10000
	2021-08-17 00:15:24.775685 I | embed: advertise client URLs = https://172.17.0.2:2379
	2021-08-17 00:15:25.734216 I | etcdserver: starting member b8e14bda2255bc24 in cluster 38b0e74a458e7a1f
	raft2021/08/17 00:15:25 INFO: b8e14bda2255bc24 switched to configuration voters=()
	raft2021/08/17 00:15:25 INFO: b8e14bda2255bc24 became follower at term 0
	raft2021/08/17 00:15:25 INFO: newRaft b8e14bda2255bc24 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2021/08/17 00:15:25 INFO: b8e14bda2255bc24 became follower at term 1
	raft2021/08/17 00:15:25 INFO: b8e14bda2255bc24 switched to configuration voters=(13322012572989635620)
	2021-08-17 00:15:26.455014 W | auth: simple token is not cryptographically signed
	2021-08-17 00:15:26.592715 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2021-08-17 00:15:26.639378 I | etcdserver: b8e14bda2255bc24 as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2021/08/17 00:15:26 INFO: b8e14bda2255bc24 switched to configuration voters=(13322012572989635620)
	2021-08-17 00:15:26.687267 I | etcdserver/membership: added member b8e14bda2255bc24 [https://172.17.0.2:2380] to cluster 38b0e74a458e7a1f
	2021-08-17 00:15:26.801338 I | embed: listening for peers on 172.17.0.2:2380
	2021-08-17 00:15:26.841167 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-17 00:15:26.886009 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2021/08/17 00:15:26 INFO: b8e14bda2255bc24 is starting a new election at term 1
	raft2021/08/17 00:15:26 INFO: b8e14bda2255bc24 became candidate at term 2
	raft2021/08/17 00:15:26 INFO: b8e14bda2255bc24 received MsgVoteResp from b8e14bda2255bc24 at term 2
	raft2021/08/17 00:15:26 INFO: b8e14bda2255bc24 became leader at term 2
	raft2021/08/17 00:15:26 INFO: raft.node: b8e14bda2255bc24 elected leader b8e14bda2255bc24 at term 2
	2021-08-17 00:15:26.912099 I | etcdserver: setting up the initial cluster version to 3.4
	2021-08-17 00:15:26.916734 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-17 00:15:26.916980 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-17 00:15:26.917113 I | etcdserver: published {Name:stopped-upgrade-20210817001119-111344 ClientURLs:[https://172.17.0.2:2379]} to cluster 38b0e74a458e7a1f
	2021-08-17 00:15:26.917307 I | embed: ready to serve client requests
	2021-08-17 00:15:26.917450 I | embed: ready to serve client requests
	2021-08-17 00:15:26.938585 I | embed: serving client requests on 172.17.0.2:2379
	2021-08-17 00:15:26.963609 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-17 00:15:37.460743 W | etcdserver: read-only range request "key:\"/registry/ranges/serviceips\" " with result "range_response_count:0 size:4" took too long (119.9857ms) to execute
	2021-08-17 00:15:37.493179 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:4" took too long (108.9275ms) to execute
	2021-08-17 00:15:57.168931 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:1 size:173" took too long (133.6824ms) to execute
	2021-08-17 00:15:57.170907 W | etcdserver: request "header:<ID:13557096368102758860 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterroles/edit\" mod_revision:355 > success:<request_put:<key:\"/registry/clusterroles/edit\" value_size:3203 >> failure:<request_range:<key:\"/registry/clusterroles/edit\" > >>" with result "size:16" took too long (101.7123ms) to execute
	2021-08-17 00:15:57.224361 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-66bff467f8-6d9nh\" " with result "range_response_count:1 size:3349" took too long (151.9459ms) to execute
	2021-08-17 00:15:58.808307 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:631" took too long (205.6119ms) to execute
	2021-08-17 00:16:07.935598 N | pkg/osutil: received terminated signal, shutting down...
	WARNING: 2021/08/17 00:16:07 grpc: addrConn.createTransport failed to connect to {172.17.0.2:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 172.17.0.2:2379: connect: connection refused". Reconnecting...
	WARNING: 2021/08/17 00:16:08 grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	2021-08-17 00:16:08.054934 I | etcdserver: skipped leadership transfer for single voting member cluster
	
	* 
	* ==> etcd [d39476c0f9e5] <==
	* 2021-08-17 00:17:42.254673 I | embed: data dir = /var/lib/minikube/etcd
	2021-08-17 00:17:42.254894 I | embed: member dir = /var/lib/minikube/etcd/member
	2021-08-17 00:17:42.255137 I | embed: heartbeat = 100ms
	2021-08-17 00:17:42.255346 I | embed: election = 1000ms
	2021-08-17 00:17:42.255594 I | embed: snapshot count = 10000
	2021-08-17 00:17:42.256172 I | embed: advertise client URLs = https://172.17.0.2:2379
	2021-08-17 00:17:42.292660 I | embed: initial advertise peer URLs = https://172.17.0.2:2380
	2021-08-17 00:17:42.292690 I | embed: initial cluster = 
	2021-08-17 00:17:42.703858 I | etcdserver: restarting member b8e14bda2255bc24 in cluster 38b0e74a458e7a1f at commit index 464
	raft2021/08/17 00:17:42 INFO: b8e14bda2255bc24 switched to configuration voters=()
	raft2021/08/17 00:17:42 INFO: b8e14bda2255bc24 became follower at term 2
	raft2021/08/17 00:17:42 INFO: newRaft b8e14bda2255bc24 [peers: [], term: 2, commit: 464, applied: 0, lastindex: 464, lastterm: 2]
	2021-08-17 00:17:42.967783 W | auth: simple token is not cryptographically signed
	2021-08-17 00:17:43.228325 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	raft2021/08/17 00:17:43 INFO: b8e14bda2255bc24 switched to configuration voters=(13322012572989635620)
	2021-08-17 00:17:43.750728 I | etcdserver/membership: added member b8e14bda2255bc24 [https://172.17.0.2:2380] to cluster 38b0e74a458e7a1f
	2021-08-17 00:17:43.793138 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-17 00:17:43.793386 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-17 00:17:45.667247 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-17 00:17:45.671730 I | embed: listening for peers on 172.17.0.2:2380
	2021-08-17 00:17:45.679866 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2021/08/17 00:17:47 INFO: b8e14bda2255bc24 is starting a new election at term 2
	raft2021/08/17 00:17:47 INFO: b8e14bda2255bc24 became candidate at term 3
	raft2021/08/17 00:17:47 INFO: b8e14bda2255bc24 received MsgVoteResp from b8e14bda2255bc24 at term 3
	raft2021/08/17 00:17:47 INFO: b8e14bda2255bc24 became leader at term 3
	raft2021/08/17 00:17:47 INFO: raft.node: b8e14bda2255bc24 elected leader b8e14bda2255bc24 at term 3
	2021-08-17 00:17:47.013368 I | etcdserver: published {Name:stopped-upgrade-20210817001119-111344 ClientURLs:[https://172.17.0.2:2379]} to cluster 38b0e74a458e7a1f
	2021-08-17 00:17:47.013437 I | embed: ready to serve client requests
	2021-08-17 00:17:47.013768 I | embed: ready to serve client requests
	2021-08-17 00:17:47.020453 I | embed: serving client requests on 172.17.0.2:2379
	2021-08-17 00:17:47.026952 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-17 00:17:53.035195 W | etcdserver: read-only range request "key:\"/registry/apiregistration.k8s.io/apiservices\" range_end:\"/registry/apiregistration.k8s.io/apiservicet\" count_only:true " with result "range_response_count:0 size:7" took too long (114.3399ms) to execute
	2021-08-17 00:18:02.451463 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (687.6793ms) to execute
	2021-08-17 00:18:02.451689 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.5490965s) to execute
	2021-08-17 00:18:02.451845 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.5038036s) to execute
	2021-08-17 00:18:02.452759 W | etcdserver: read-only range request "key:\"/registry/events/default/stopped-upgrade-20210817001119-111344.169befe365b46578\" " with result "range_response_count:1 size:795" took too long (1.3782938s) to execute
	2021-08-17 00:18:02.454245 W | etcdserver: read-only range request "key:\"/registry/roles/kube-public/system:controller:bootstrap-signer\" " with result "range_response_count:1 size:709" took too long (1.5396871s) to execute
	2021-08-17 00:18:04.358257 W | wal: sync duration of 1.1338812s, expected less than 1s
	2021-08-17 00:18:04.436404 W | etcdserver: read-only range request "key:\"/registry/events/default/stopped-upgrade-20210817001119-111344.169befe365b47bbc\" " with result "range_response_count:1 size:793" took too long (1.9636123s) to execute
	2021-08-17 00:18:04.437882 W | etcdserver: request "header:<ID:13557096368137779319 > lease_revoke:<id:3c247b517680da28>" with result "size:28" took too long (1.2126076s) to execute
	2021-08-17 00:18:04.441034 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.6771724s) to execute
	2021-08-17 00:18:04.443088 W | etcdserver: read-only range request "key:\"/registry/rolebindings/kube-system/system::leader-locking-kube-controller-manager\" " with result "range_response_count:1 size:892" took too long (1.9641292s) to execute
	2021-08-17 00:18:04.444061 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.8848615s) to execute
	2021-08-17 00:18:05.396443 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "error:context canceled" took too long (2.0043602s) to execute
	WARNING: 2021/08/17 00:18:05 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2021-08-17 00:18:05.988677 W | wal: sync duration of 1.5479614s, expected less than 1s
	2021-08-17 00:18:05.991160 W | etcdserver: read-only range request "key:\"/registry/rolebindings/kube-system/system::leader-locking-kube-scheduler\" " with result "range_response_count:1 size:847" took too long (1.5428363s) to execute
	2021-08-17 00:18:05.992206 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.2271469s) to execute
	2021-08-17 00:18:05.992937 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.4444733s) to execute
	2021-08-17 00:18:05.993302 W | etcdserver: read-only range request "key:\"/registry/events/default/stopped-upgrade-20210817001119-111344.169befe365b46578\" " with result "range_response_count:1 size:795" took too long (1.5314037s) to execute
	2021-08-17 00:18:06.219908 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (171.7694ms) to execute
	2021-08-17 00:18:06.223165 W | etcdserver: read-only range request "key:\"/registry/rolebindings/kube-system/system:controller:cloud-provider\" " with result "range_response_count:1 size:772" took too long (221.623ms) to execute
	2021-08-17 00:18:06.226377 W | etcdserver: read-only range request "key:\"/registry/events/default/stopped-upgrade-20210817001119-111344.169befe365b43508\" " with result "range_response_count:1 size:799" took too long (217.8588ms) to execute
	2021-08-17 00:18:07.839089 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.0734496s) to execute
	2021-08-17 00:18:07.839933 W | etcdserver: read-only range request "key:\"/registry/events/default/stopped-upgrade-20210817001119-111344.169befe365b43508\" " with result "range_response_count:1 size:799" took too long (1.5163433s) to execute
	2021-08-17 00:18:07.840809 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.4892398s) to execute
	2021-08-17 00:18:07.841906 W | etcdserver: read-only range request "key:\"/registry/rolebindings/kube-public/system:controller:bootstrap-signer\" " with result "range_response_count:1 size:780" took too long (1.5283817s) to execute
	2021-08-17 00:18:07.843546 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.4391796s) to execute
	2021-08-17 00:18:19.874133 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-scheduler-stopped-upgrade-20210817001119-111344\" " with result "range_response_count:1 size:3821" took too long (343.9035ms) to execute
	2021-08-17 00:18:19.874804 W | etcdserver: read-only range request "key:\"/registry/limitranges/kube-system/\" range_end:\"/registry/limitranges/kube-system0\" " with result "range_response_count:0 size:5" took too long (225.1126ms) to execute
	
	* 
	* ==> kernel <==
	*  00:18:28 up  1:14,  0 users,  load average: 14.99, 10.91, 6.32
	Linux stopped-upgrade-20210817001119-111344 4.19.121-linuxkit #1 SMP Tue Dec 1 17:50:32 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 19.10"
	
	* 
	* ==> kube-apiserver [370c67f2bdcd] <==
	* I0817 00:17:59.256762       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	I0817 00:17:59.390998       1 log.go:172] http: TLS handshake error from 172.17.0.2:41664: EOF
	I0817 00:17:59.407065       1 log.go:172] http: TLS handshake error from 172.17.0.2:41636: EOF
	I0817 00:17:59.425918       1 log.go:172] http: TLS handshake error from 172.17.0.2:41638: EOF
	I0817 00:17:59.437297       1 log.go:172] http: TLS handshake error from 172.17.0.2:41668: EOF
	I0817 00:17:59.549532       1 log.go:172] http: TLS handshake error from 172.17.0.2:41666: EOF
	I0817 00:17:59.593220       1 log.go:172] http: TLS handshake error from 172.17.0.2:41758: EOF
	I0817 00:17:59.676763       1 log.go:172] http: TLS handshake error from 172.17.0.2:41650: EOF
	I0817 00:17:59.684468       1 log.go:172] http: TLS handshake error from 172.17.0.2:41634: EOF
	I0817 00:17:59.798318       1 log.go:172] http: TLS handshake error from 172.17.0.2:41644: EOF
	I0817 00:17:59.818090       1 log.go:172] http: TLS handshake error from 172.17.0.2:41640: EOF
	I0817 00:17:59.823821       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0817 00:17:59.863509       1 cache.go:39] Caches are synced for autoregister controller
	I0817 00:17:59.863924       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0817 00:17:59.872564       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0817 00:17:59.890981       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0817 00:17:59.902352       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0817 00:17:59.985777       1 trace.go:116] Trace[1846177694]: "Create" url:/api/v1/nodes,user-agent:kubelet/v1.18.0 (linux/amd64) kubernetes/9e99141,client:172.17.0.2 (started: 2021-08-17 00:17:59.4771405 +0000 UTC m=+21.241145701) (total time: 508.391ms):
	Trace[1846177694]: [508.391ms] [500.6885ms] END
	I0817 00:18:00.067474       1 trace.go:116] Trace[50189895]: "Create" url:/api/v1/namespaces/default/events,user-agent:kubelet/v1.18.0 (linux/amd64) kubernetes/9e99141,client:172.17.0.2 (started: 2021-08-17 00:17:59.4637831 +0000 UTC m=+21.227788301) (total time: 603.6442ms):
	Trace[50189895]: [603.5503ms] [602.0077ms] Object stored in database
	E0817 00:18:00.071966       1 controller.go:152] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0817 00:18:00.201148       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0817 00:18:00.201760       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0817 00:18:00.261378       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0817 00:18:02.457051       1 trace.go:116] Trace[1691916740]: "Get" url:/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer,user-agent:kube-apiserver/v1.18.0 (linux/amd64) kubernetes/9e99141,client:127.0.0.1 (started: 2021-08-17 00:18:00.9119621 +0000 UTC m=+22.675967401) (total time: 1.545042s):
	Trace[1691916740]: [1.5449839s] [1.5449714s] About to write a response
	I0817 00:18:02.469208       1 trace.go:116] Trace[1825289909]: "GuaranteedUpdate etcd3" type:*core.Event (started: 2021-08-17 00:18:01.0730725 +0000 UTC m=+22.837077801) (total time: 1.3960977s):
	Trace[1825289909]: [1.3826657s] [1.3826657s] initial value restored
	I0817 00:18:02.469329       1 trace.go:116] Trace[2088841084]: "Patch" url:/api/v1/namespaces/default/events/stopped-upgrade-20210817001119-111344.169befe365b46578,user-agent:kubelet/v1.18.0 (linux/amd64) kubernetes/9e99141,client:172.17.0.2 (started: 2021-08-17 00:18:01.072523 +0000 UTC m=+22.836528201) (total time: 1.3967738s):
	Trace[2088841084]: [1.3832183s] [1.3831784s] About to apply patch
	I0817 00:18:04.446022       1 trace.go:116] Trace[387899727]: "Get" url:/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager,user-agent:kube-apiserver/v1.18.0 (linux/amd64) kubernetes/9e99141,client:127.0.0.1 (started: 2021-08-17 00:18:02.4761892 +0000 UTC m=+24.240194501) (total time: 1.9697749s):
	Trace[387899727]: [1.9697077s] [1.9697012s] About to write a response
	I0817 00:18:04.448537       1 trace.go:116] Trace[927730975]: "GuaranteedUpdate etcd3" type:*core.Event (started: 2021-08-17 00:18:02.4713929 +0000 UTC m=+24.235398201) (total time: 1.9771151s):
	Trace[927730975]: [1.9673893s] [1.9673893s] initial value restored
	I0817 00:18:04.449142       1 trace.go:116] Trace[1071885424]: "Patch" url:/api/v1/namespaces/default/events/stopped-upgrade-20210817001119-111344.169befe365b47bbc,user-agent:kubelet/v1.18.0 (linux/amd64) kubernetes/9e99141,client:172.17.0.2 (started: 2021-08-17 00:18:02.4712949 +0000 UTC m=+24.235300201) (total time: 1.9778141s):
	Trace[1071885424]: [1.9674909s] [1.9674521s] About to apply patch
	I0817 00:18:05.994476       1 trace.go:116] Trace[830290457]: "Get" url:/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler,user-agent:kube-apiserver/v1.18.0 (linux/amd64) kubernetes/9e99141,client:127.0.0.1 (started: 2021-08-17 00:18:04.4474186 +0000 UTC m=+26.211423801) (total time: 1.5469587s):
	Trace[830290457]: [1.5468148s] [1.5468058s] About to write a response
	I0817 00:18:06.003873       1 trace.go:116] Trace[730926505]: "GuaranteedUpdate etcd3" type:*core.Event (started: 2021-08-17 00:18:04.4583073 +0000 UTC m=+26.222312601) (total time: 1.5455337s):
	Trace[730926505]: [1.5362385s] [1.5362385s] initial value restored
	I0817 00:18:06.004029       1 trace.go:116] Trace[559686891]: "Patch" url:/api/v1/namespaces/default/events/stopped-upgrade-20210817001119-111344.169befe365b46578,user-agent:kubelet/v1.18.0 (linux/amd64) kubernetes/9e99141,client:172.17.0.2 (started: 2021-08-17 00:18:04.4580962 +0000 UTC m=+26.222101501) (total time: 1.5459052s):
	Trace[559686891]: [1.5364528s] [1.5364158s] About to apply patch
	I0817 00:18:07.850718       1 trace.go:116] Trace[1531148166]: "Get" url:/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer,user-agent:kube-apiserver/v1.18.0 (linux/amd64) kubernetes/9e99141,client:127.0.0.1 (started: 2021-08-17 00:18:06.3094183 +0000 UTC m=+28.073423601) (total time: 1.5411539s):
	Trace[1531148166]: [1.5407891s] [1.5407811s] About to write a response
	I0817 00:18:07.855944       1 trace.go:116] Trace[1018722293]: "GuaranteedUpdate etcd3" type:*core.Event (started: 2021-08-17 00:18:06.3219806 +0000 UTC m=+28.085985801) (total time: 1.533936s):
	Trace[1018722293]: [1.524862s] [1.524862s] initial value restored
	I0817 00:18:07.856312       1 trace.go:116] Trace[1579664631]: "Patch" url:/api/v1/namespaces/default/events/stopped-upgrade-20210817001119-111344.169befe365b43508,user-agent:kubelet/v1.18.0 (linux/amd64) kubernetes/9e99141,client:172.17.0.2 (started: 2021-08-17 00:18:06.3218905 +0000 UTC m=+28.085895801) (total time: 1.5343188s):
	Trace[1579664631]: [1.5249605s] [1.5249311s] About to apply patch
	I0817 00:18:08.889765       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0817 00:18:08.982833       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0817 00:18:09.339260       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0817 00:18:09.453948       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0817 00:18:09.495856       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0817 00:18:14.100023       1 controller.go:606] quota admission added evaluator for: endpoints
	I0817 00:18:19.924197       1 trace.go:116] Trace[1664131935]: "Delete" url:/api/v1/namespaces/kube-system/pods/kube-scheduler-stopped-upgrade-20210817001119-111344,user-agent:kubelet/v1.18.0 (linux/amd64) kubernetes/9e99141,client:172.17.0.2 (started: 2021-08-17 00:18:19.0949942 +0000 UTC m=+40.858367301) (total time: 829.0539ms):
	Trace[1664131935]: [142.7161ms] [142.7161ms] Decoded delete options
	Trace[1664131935]: [828.9556ms] [686.2329ms] Object deleted from database
	I0817 00:18:22.798028       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0817 00:18:23.060810       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-apiserver [7a43cdf5f68b] <==
	* W0817 00:16:16.427553       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:16:16.436361       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:16:16.615354       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:16:16.665916       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:16:16.703791       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:16:16.722858       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:16:16.733617       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:16:16.806715       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:16:16.813387       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:16:16.821105       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:16:16.832563       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:16:16.854638       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:16:16.856349       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:16:16.895048       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:16:16.914782       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:16:16.997248       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:16:17.002601       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:16:17.028598       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:16:17.054656       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:16:17.055000       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:16:17.062521       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:16:17.082241       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:16:17.084799       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:16:17.124906       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:16:17.125188       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:16:17.172048       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:16:17.180896       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:16:17.183964       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:16:17.212310       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:16:17.236119       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:16:17.261292       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:16:17.270345       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:16:17.270756       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:16:17.321929       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:16:17.364042       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:16:17.366796       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:16:17.424264       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:16:17.433835       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:16:17.468831       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:16:17.491523       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:16:17.510670       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:16:17.534191       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:16:17.565905       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:16:17.601733       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:16:17.611214       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:16:17.614834       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:16:17.623217       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:16:17.653520       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:16:17.708187       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:16:17.720283       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:16:17.726169       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:16:17.837283       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:16:17.849523       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:16:17.858026       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:16:17.897529       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:16:18.087220       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:16:18.119306       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:16:18.138380       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:16:18.212117       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:16:18.316876       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	
	* 
	* ==> kube-controller-manager [1cd4a638b1b8] <==
	* 
	* ==> kube-controller-manager [7d449783b28f] <==
	* I0817 00:18:22.091687       1 controllermanager.go:533] Started "replicaset"
	I0817 00:18:22.091865       1 replica_set.go:181] Starting replicaset controller
	I0817 00:18:22.091873       1 shared_informer.go:223] Waiting for caches to sync for ReplicaSet
	I0817 00:18:22.246010       1 controllermanager.go:533] Started "horizontalpodautoscaling"
	I0817 00:18:22.246665       1 horizontal.go:169] Starting HPA controller
	I0817 00:18:22.246679       1 shared_informer.go:223] Waiting for caches to sync for HPA
	I0817 00:18:22.347648       1 disruption.go:331] Starting disruption controller
	I0817 00:18:22.347666       1 shared_informer.go:223] Waiting for caches to sync for disruption
	I0817 00:18:22.347691       1 controllermanager.go:533] Started "disruption"
	I0817 00:18:22.413015       1 node_lifecycle_controller.go:384] Sending events to api server.
	I0817 00:18:22.413346       1 taint_manager.go:163] Sending events to api server.
	I0817 00:18:22.413770       1 node_lifecycle_controller.go:512] Controller will reconcile labels.
	I0817 00:18:22.413810       1 controllermanager.go:533] Started "nodelifecycle"
	I0817 00:18:22.415369       1 node_lifecycle_controller.go:546] Starting node controller
	I0817 00:18:22.415563       1 shared_informer.go:223] Waiting for caches to sync for taint
	I0817 00:18:22.416951       1 shared_informer.go:223] Waiting for caches to sync for resource quota
	I0817 00:18:22.674881       1 shared_informer.go:230] Caches are synced for HPA 
	I0817 00:18:22.711847       1 shared_informer.go:230] Caches are synced for PVC protection 
	I0817 00:18:22.716917       1 shared_informer.go:230] Caches are synced for bootstrap_signer 
	I0817 00:18:22.718182       1 shared_informer.go:230] Caches are synced for job 
	I0817 00:18:22.730460       1 shared_informer.go:230] Caches are synced for certificate-csrsigning 
	I0817 00:18:22.730699       1 shared_informer.go:230] Caches are synced for endpoint_slice 
	I0817 00:18:22.730732       1 shared_informer.go:230] Caches are synced for ClusterRoleAggregator 
	I0817 00:18:22.755095       1 shared_informer.go:230] Caches are synced for expand 
	I0817 00:18:22.757368       1 shared_informer.go:230] Caches are synced for certificate-csrapproving 
	I0817 00:18:22.764814       1 shared_informer.go:230] Caches are synced for service account 
	I0817 00:18:22.764878       1 shared_informer.go:230] Caches are synced for PV protection 
	I0817 00:18:22.795812       1 shared_informer.go:230] Caches are synced for namespace 
	W0817 00:18:22.832633       1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="stopped-upgrade-20210817001119-111344" does not exist
	I0817 00:18:22.835444       1 shared_informer.go:230] Caches are synced for stateful set 
	I0817 00:18:22.848390       1 shared_informer.go:230] Caches are synced for attach detach 
	I0817 00:18:22.852197       1 shared_informer.go:230] Caches are synced for endpoint 
	I0817 00:18:22.853070       1 shared_informer.go:230] Caches are synced for TTL 
	I0817 00:18:22.873972       1 shared_informer.go:230] Caches are synced for GC 
	I0817 00:18:22.877386       1 shared_informer.go:230] Caches are synced for daemon sets 
	I0817 00:18:22.900889       1 shared_informer.go:230] Caches are synced for persistent volume 
	I0817 00:18:22.901852       1 shared_informer.go:230] Caches are synced for node 
	I0817 00:18:22.901894       1 range_allocator.go:172] Starting range CIDR allocator
	I0817 00:18:22.901900       1 shared_informer.go:223] Waiting for caches to sync for cidrallocator
	I0817 00:18:22.901906       1 shared_informer.go:230] Caches are synced for cidrallocator 
	I0817 00:18:22.917163       1 shared_informer.go:230] Caches are synced for taint 
	I0817 00:18:22.917762       1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone: 
	I0817 00:18:22.918097       1 taint_manager.go:187] Starting NoExecuteTaintManager
	W0817 00:18:22.921399       1 node_lifecycle_controller.go:1048] Missing timestamp for Node stopped-upgrade-20210817001119-111344. Assuming now as a timestamp.
	I0817 00:18:22.922083       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"stopped-upgrade-20210817001119-111344", UID:"7d7c20df-070e-455a-9cff-4cae2fae47b0", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node stopped-upgrade-20210817001119-111344 event: Registered Node stopped-upgrade-20210817001119-111344 in Controller
	I0817 00:18:22.922335       1 node_lifecycle_controller.go:1249] Controller detected that zone  is now in state Normal.
	I0817 00:18:22.995561       1 shared_informer.go:230] Caches are synced for ReplicaSet 
	I0817 00:18:23.011427       1 shared_informer.go:230] Caches are synced for deployment 
	I0817 00:18:23.032441       1 shared_informer.go:230] Caches are synced for ReplicationController 
	I0817 00:18:23.048122       1 shared_informer.go:230] Caches are synced for disruption 
	I0817 00:18:23.048144       1 disruption.go:339] Sending events to api server.
	I0817 00:18:23.051361       1 request.go:621] Throttling request took 1.0071163s, request: GET:https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1beta1?timeout=32s
	I0817 00:18:23.091656       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"8529cda8-3bf8-4d8d-adf2-dd26e917a9f7", APIVersion:"apps/v1", ResourceVersion:"499", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I0817 00:18:23.117397       1 shared_informer.go:230] Caches are synced for resource quota 
	I0817 00:18:23.129946       1 shared_informer.go:230] Caches are synced for resource quota 
	I0817 00:18:23.135309       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0817 00:18:23.135323       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0817 00:18:23.220423       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"43c48c68-2335-4b8d-a51a-a43a451bcc55", APIVersion:"apps/v1", ResourceVersion:"535", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-2zqdn
	I0817 00:18:23.504098       1 shared_informer.go:223] Waiting for caches to sync for garbage collector
	I0817 00:18:23.504158       1 shared_informer.go:230] Caches are synced for garbage collector 
	
	* 
	* ==> kube-proxy [90720ba7f59f] <==
	* W0817 00:16:03.714886       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0817 00:16:03.788528       1 node.go:136] Successfully retrieved node IP: 172.17.0.2
	I0817 00:16:03.789413       1 server_others.go:186] Using iptables Proxier.
	I0817 00:16:03.791533       1 server.go:583] Version: v1.18.0
	I0817 00:16:03.792982       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
	I0817 00:16:03.793019       1 conntrack.go:52] Setting nf_conntrack_max to 131072
	I0817 00:16:03.793528       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
	I0817 00:16:03.793594       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
	I0817 00:16:03.801364       1 config.go:315] Starting service config controller
	I0817 00:16:03.801450       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0817 00:16:03.801527       1 config.go:133] Starting endpoints config controller
	I0817 00:16:03.802690       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0817 00:16:03.905458       1 shared_informer.go:230] Caches are synced for endpoints config 
	I0817 00:16:03.905703       1 shared_informer.go:230] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [5eb9884981d7] <==
	* I0817 00:17:39.325983       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0817 00:17:39.326531       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0817 00:17:42.761842       1 serving.go:313] Generated self-signed cert in-memory
	W0817 00:17:55.704004       1 authentication.go:297] Error looking up in-cluster authentication configuration: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: net/http: TLS handshake timeout
	W0817 00:17:55.704196       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0817 00:17:55.704670       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0817 00:17:59.891278       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0817 00:17:59.906964       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	W0817 00:17:59.925555       1 authorization.go:47] Authorization is disabled
	W0817 00:17:59.925752       1 authentication.go:40] Authentication is disabled
	I0817 00:17:59.925892       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I0817 00:17:59.948955       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0817 00:17:59.949514       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0817 00:17:59.949566       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0817 00:17:59.949354       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0817 00:18:00.059896       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
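
The warnings above show the scheduler timing out a TLS handshake while fetching its in-cluster authentication configmap, then continuing with anonymous requests. A hypothetical diagnostic, assuming kubectl access to this profile's context, is to confirm the configmap is reachable once the apiserver settles:

	# Hypothetical check; the configmap name comes from the scheduler log above.
	kubectl -n kube-system get configmap extension-apiserver-authentication -o name
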
	
	* 
	* ==> kube-scheduler [b1c3fba68b4c] <==
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2021-08-17 00:16:31 UTC, end at Tue 2021-08-17 00:18:34 UTC. --
	Aug 17 00:18:19 stopped-upgrade-20210817001119-111344 kubelet[1156]: I0817 00:18:19.091811    1156 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7c6ba66b-a666-4d7d-af75-8ef87a624dd4-config-volume") pod "coredns-66bff467f8-6d9nh" (UID: "7c6ba66b-a666-4d7d-af75-8ef87a624dd4")
	Aug 17 00:18:19 stopped-upgrade-20210817001119-111344 kubelet[1156]: I0817 00:18:19.091922    1156 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/ccfaa503-d095-40cf-aa5c-acf1be35dce4-lib-modules") pod "kindnet-ngdjs" (UID: "ccfaa503-d095-40cf-aa5c-acf1be35dce4")
	Aug 17 00:18:19 stopped-upgrade-20210817001119-111344 kubelet[1156]: I0817 00:18:19.111034    1156 topology_manager.go:233] [topologymanager] Topology Admit Handler
	Aug 17 00:18:19 stopped-upgrade-20210817001119-111344 kubelet[1156]: I0817 00:18:19.125900    1156 topology_manager.go:233] [topologymanager] Topology Admit Handler
	Aug 17 00:18:19 stopped-upgrade-20210817001119-111344 kubelet[1156]: W0817 00:18:19.207142    1156 kubelet.go:1650] Deleted mirror pod "kube-controller-manager-stopped-upgrade-20210817001119-111344_kube-system(b1fc0d1a-112f-45a7-b948-a248c4d1cddc)" because it is outdated
	Aug 17 00:18:19 stopped-upgrade-20210817001119-111344 kubelet[1156]: I0817 00:18:19.226165    1156 topology_manager.go:233] [topologymanager] Topology Admit Handler
	Aug 17 00:18:19 stopped-upgrade-20210817001119-111344 kubelet[1156]: W0817 00:18:19.401116    1156 kubelet.go:1650] Deleted mirror pod "etcd-stopped-upgrade-20210817001119-111344_kube-system(563656c4-e5a6-4375-bfdb-593b77d6318a)" because it is outdated
	Aug 17 00:18:19 stopped-upgrade-20210817001119-111344 kubelet[1156]: I0817 00:18:19.498159    1156 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/902057b3-d4d6-4fe5-a221-f8f1d206d1f5-kube-proxy") pod "kube-proxy-zbbt2" (UID: "902057b3-d4d6-4fe5-a221-f8f1d206d1f5")
	Aug 17 00:18:19 stopped-upgrade-20210817001119-111344 kubelet[1156]: I0817 00:18:19.502507    1156 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/902057b3-d4d6-4fe5-a221-f8f1d206d1f5-xtables-lock") pod "kube-proxy-zbbt2" (UID: "902057b3-d4d6-4fe5-a221-f8f1d206d1f5")
	Aug 17 00:18:19 stopped-upgrade-20210817001119-111344 kubelet[1156]: I0817 00:18:19.503358    1156 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/902057b3-d4d6-4fe5-a221-f8f1d206d1f5-lib-modules") pod "kube-proxy-zbbt2" (UID: "902057b3-d4d6-4fe5-a221-f8f1d206d1f5")
	Aug 17 00:18:19 stopped-upgrade-20210817001119-111344 kubelet[1156]: I0817 00:18:19.503833    1156 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-64ntx" (UniqueName: "kubernetes.io/secret/902057b3-d4d6-4fe5-a221-f8f1d206d1f5-kube-proxy-token-64ntx") pod "kube-proxy-zbbt2" (UID: "902057b3-d4d6-4fe5-a221-f8f1d206d1f5")
	Aug 17 00:18:19 stopped-upgrade-20210817001119-111344 kubelet[1156]: I0817 00:18:19.504628    1156 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-68slm" (UniqueName: "kubernetes.io/secret/603f71ef-a5a9-4806-9a61-0c66ccd1a1b3-storage-provisioner-token-68slm") pod "storage-provisioner" (UID: "603f71ef-a5a9-4806-9a61-0c66ccd1a1b3")
	Aug 17 00:18:19 stopped-upgrade-20210817001119-111344 kubelet[1156]: I0817 00:18:19.504780    1156 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fa8ebea5-4c71-46fe-970e-76d34d9e2b2a-config-volume") pod "coredns-66bff467f8-2zqdn" (UID: "fa8ebea5-4c71-46fe-970e-76d34d9e2b2a")
	Aug 17 00:18:19 stopped-upgrade-20210817001119-111344 kubelet[1156]: I0817 00:18:19.504892    1156 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-76l2h" (UniqueName: "kubernetes.io/secret/fa8ebea5-4c71-46fe-970e-76d34d9e2b2a-coredns-token-76l2h") pod "coredns-66bff467f8-2zqdn" (UID: "fa8ebea5-4c71-46fe-970e-76d34d9e2b2a")
	Aug 17 00:18:19 stopped-upgrade-20210817001119-111344 kubelet[1156]: I0817 00:18:19.504986    1156 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/603f71ef-a5a9-4806-9a61-0c66ccd1a1b3-tmp") pod "storage-provisioner" (UID: "603f71ef-a5a9-4806-9a61-0c66ccd1a1b3")
	Aug 17 00:18:19 stopped-upgrade-20210817001119-111344 kubelet[1156]: I0817 00:18:19.505061    1156 reconciler.go:157] Reconciler: start to sync state
	Aug 17 00:18:19 stopped-upgrade-20210817001119-111344 kubelet[1156]: W0817 00:18:19.813062    1156 watcher.go:87] Error while processing event ("/sys/fs/cgroup/blkio/system.slice/run-r20ad76afbbcc45108255506e9c26263f.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/system.slice/run-r20ad76afbbcc45108255506e9c26263f.scope: no such file or directory
	Aug 17 00:18:19 stopped-upgrade-20210817001119-111344 kubelet[1156]: W0817 00:18:19.813163    1156 watcher.go:87] Error while processing event ("/sys/fs/cgroup/memory/system.slice/run-r20ad76afbbcc45108255506e9c26263f.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/run-r20ad76afbbcc45108255506e9c26263f.scope: no such file or directory
	Aug 17 00:18:19 stopped-upgrade-20210817001119-111344 kubelet[1156]: W0817 00:18:19.980400    1156 kubelet.go:1650] Deleted mirror pod "kube-scheduler-stopped-upgrade-20210817001119-111344_kube-system(c3f24f54-2e24-4e08-8ebf-6269aed25818)" because it is outdated
	Aug 17 00:18:19 stopped-upgrade-20210817001119-111344 kubelet[1156]: W0817 00:18:19.981062    1156 watcher.go:87] Error while processing event ("/sys/fs/cgroup/devices/system.slice/run-r20ad76afbbcc45108255506e9c26263f.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/system.slice/run-r20ad76afbbcc45108255506e9c26263f.scope: no such file or directory
	Aug 17 00:18:19 stopped-upgrade-20210817001119-111344 kubelet[1156]: W0817 00:18:19.981163    1156 watcher.go:87] Error while processing event ("/sys/fs/cgroup/pids/system.slice/run-r20ad76afbbcc45108255506e9c26263f.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/system.slice/run-r20ad76afbbcc45108255506e9c26263f.scope: no such file or directory
	Aug 17 00:18:20 stopped-upgrade-20210817001119-111344 kubelet[1156]: W0817 00:18:20.087016    1156 watcher.go:87] Error while processing event ("/sys/fs/cgroup/cpu/system.slice/run-rcf1b0afb9d3f48e8a37c5313cc434d5f.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/cpu/system.slice/run-rcf1b0afb9d3f48e8a37c5313cc434d5f.scope: no such file or directory
	Aug 17 00:18:20 stopped-upgrade-20210817001119-111344 kubelet[1156]: W0817 00:18:20.114051    1156 watcher.go:87] Error while processing event ("/sys/fs/cgroup/memory/system.slice/run-rcf1b0afb9d3f48e8a37c5313cc434d5f.scope": 0x40000100 == IN_CREATE|IN_ISDIR): readdirent: no such file or directory
	Aug 17 00:18:20 stopped-upgrade-20210817001119-111344 kubelet[1156]: W0817 00:18:20.116493    1156 watcher.go:87] Error while processing event ("/sys/fs/cgroup/devices/system.slice/run-rcf1b0afb9d3f48e8a37c5313cc434d5f.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/system.slice/run-rcf1b0afb9d3f48e8a37c5313cc434d5f.scope: no such file or directory
	Aug 17 00:18:20 stopped-upgrade-20210817001119-111344 kubelet[1156]: W0817 00:18:20.117209    1156 watcher.go:87] Error while processing event ("/sys/fs/cgroup/pids/system.slice/run-rcf1b0afb9d3f48e8a37c5313cc434d5f.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/system.slice/run-rcf1b0afb9d3f48e8a37c5313cc434d5f.scope: no such file or directory
	Aug 17 00:18:20 stopped-upgrade-20210817001119-111344 kubelet[1156]: W0817 00:18:20.312875    1156 watcher.go:87] Error while processing event ("/sys/fs/cgroup/cpu/system.slice/run-r3c1d371b45594724bbb01ac916c5a2cd.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/cpu/system.slice/run-r3c1d371b45594724bbb01ac916c5a2cd.scope: no such file or directory
	Aug 17 00:18:20 stopped-upgrade-20210817001119-111344 kubelet[1156]: W0817 00:18:20.314589    1156 watcher.go:87] Error while processing event ("/sys/fs/cgroup/blkio/system.slice/run-r3c1d371b45594724bbb01ac916c5a2cd.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/system.slice/run-r3c1d371b45594724bbb01ac916c5a2cd.scope: no such file or directory
	Aug 17 00:18:20 stopped-upgrade-20210817001119-111344 kubelet[1156]: W0817 00:18:20.314808    1156 watcher.go:87] Error while processing event ("/sys/fs/cgroup/memory/system.slice/run-r3c1d371b45594724bbb01ac916c5a2cd.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/run-r3c1d371b45594724bbb01ac916c5a2cd.scope: no such file or directory
	Aug 17 00:18:20 stopped-upgrade-20210817001119-111344 kubelet[1156]: W0817 00:18:20.314929    1156 watcher.go:87] Error while processing event ("/sys/fs/cgroup/devices/system.slice/run-r3c1d371b45594724bbb01ac916c5a2cd.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/system.slice/run-r3c1d371b45594724bbb01ac916c5a2cd.scope: no such file or directory
	Aug 17 00:18:20 stopped-upgrade-20210817001119-111344 kubelet[1156]: W0817 00:18:20.315055    1156 watcher.go:87] Error while processing event ("/sys/fs/cgroup/pids/system.slice/run-r3c1d371b45594724bbb01ac916c5a2cd.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/system.slice/run-r3c1d371b45594724bbb01ac916c5a2cd.scope: no such file or directory
	Aug 17 00:18:20 stopped-upgrade-20210817001119-111344 kubelet[1156]: W0817 00:18:20.315150    1156 watcher.go:87] Error while processing event ("/sys/fs/cgroup/cpu/system.slice/run-r24584888d7784f20830eed90e7d87d63.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/cpu/system.slice/run-r24584888d7784f20830eed90e7d87d63.scope: no such file or directory
	Aug 17 00:18:20 stopped-upgrade-20210817001119-111344 kubelet[1156]: W0817 00:18:20.315341    1156 watcher.go:87] Error while processing event ("/sys/fs/cgroup/blkio/system.slice/run-r24584888d7784f20830eed90e7d87d63.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/system.slice/run-r24584888d7784f20830eed90e7d87d63.scope: no such file or directory
	Aug 17 00:18:20 stopped-upgrade-20210817001119-111344 kubelet[1156]: W0817 00:18:20.315464    1156 watcher.go:87] Error while processing event ("/sys/fs/cgroup/memory/system.slice/run-r24584888d7784f20830eed90e7d87d63.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/run-r24584888d7784f20830eed90e7d87d63.scope: no such file or directory
	Aug 17 00:18:20 stopped-upgrade-20210817001119-111344 kubelet[1156]: W0817 00:18:20.315589    1156 watcher.go:87] Error while processing event ("/sys/fs/cgroup/devices/system.slice/run-r24584888d7784f20830eed90e7d87d63.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/system.slice/run-r24584888d7784f20830eed90e7d87d63.scope: no such file or directory
	Aug 17 00:18:20 stopped-upgrade-20210817001119-111344 kubelet[1156]: W0817 00:18:20.315696    1156 watcher.go:87] Error while processing event ("/sys/fs/cgroup/pids/system.slice/run-r24584888d7784f20830eed90e7d87d63.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/system.slice/run-r24584888d7784f20830eed90e7d87d63.scope: no such file or directory
	Aug 17 00:18:20 stopped-upgrade-20210817001119-111344 kubelet[1156]: I0817 00:18:20.846650    1156 kubelet.go:1646] Trying to delete pod kube-scheduler-stopped-upgrade-20210817001119-111344_kube-system c3f24f54-2e24-4e08-8ebf-6269aed25818
	Aug 17 00:18:20 stopped-upgrade-20210817001119-111344 kubelet[1156]: I0817 00:18:20.848758    1156 kubelet.go:1646] Trying to delete pod etcd-stopped-upgrade-20210817001119-111344_kube-system 563656c4-e5a6-4375-bfdb-593b77d6318a
	Aug 17 00:18:21 stopped-upgrade-20210817001119-111344 kubelet[1156]: E0817 00:18:21.492935    1156 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	Aug 17 00:18:21 stopped-upgrade-20210817001119-111344 kubelet[1156]: E0817 00:18:21.494388    1156 helpers.go:680] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	Aug 17 00:18:23 stopped-upgrade-20210817001119-111344 kubelet[1156]: I0817 00:18:23.402867    1156 reconciler.go:196] operationExecutor.UnmountVolume started for volume "coredns-token-76l2h" (UniqueName: "kubernetes.io/secret/fa8ebea5-4c71-46fe-970e-76d34d9e2b2a-coredns-token-76l2h") pod "fa8ebea5-4c71-46fe-970e-76d34d9e2b2a" (UID: "fa8ebea5-4c71-46fe-970e-76d34d9e2b2a")
	Aug 17 00:18:23 stopped-upgrade-20210817001119-111344 kubelet[1156]: I0817 00:18:23.403354    1156 reconciler.go:196] operationExecutor.UnmountVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fa8ebea5-4c71-46fe-970e-76d34d9e2b2a-config-volume") pod "fa8ebea5-4c71-46fe-970e-76d34d9e2b2a" (UID: "fa8ebea5-4c71-46fe-970e-76d34d9e2b2a")
	Aug 17 00:18:23 stopped-upgrade-20210817001119-111344 kubelet[1156]: W0817 00:18:23.408884    1156 empty_dir.go:453] Warning: Failed to clear quota on /var/lib/kubelet/pods/fa8ebea5-4c71-46fe-970e-76d34d9e2b2a/volumes/kubernetes.io~configmap/config-volume: ClearQuota called, but quotas disabled
	Aug 17 00:18:23 stopped-upgrade-20210817001119-111344 kubelet[1156]: I0817 00:18:23.409378    1156 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa8ebea5-4c71-46fe-970e-76d34d9e2b2a-config-volume" (OuterVolumeSpecName: "config-volume") pod "fa8ebea5-4c71-46fe-970e-76d34d9e2b2a" (UID: "fa8ebea5-4c71-46fe-970e-76d34d9e2b2a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Aug 17 00:18:23 stopped-upgrade-20210817001119-111344 kubelet[1156]: I0817 00:18:23.456453    1156 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa8ebea5-4c71-46fe-970e-76d34d9e2b2a-coredns-token-76l2h" (OuterVolumeSpecName: "coredns-token-76l2h") pod "fa8ebea5-4c71-46fe-970e-76d34d9e2b2a" (UID: "fa8ebea5-4c71-46fe-970e-76d34d9e2b2a"). InnerVolumeSpecName "coredns-token-76l2h". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 17 00:18:23 stopped-upgrade-20210817001119-111344 kubelet[1156]: I0817 00:18:23.504778    1156 reconciler.go:319] Volume detached for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fa8ebea5-4c71-46fe-970e-76d34d9e2b2a-config-volume") on node "stopped-upgrade-20210817001119-111344" DevicePath ""
	Aug 17 00:18:23 stopped-upgrade-20210817001119-111344 kubelet[1156]: I0817 00:18:23.504820    1156 reconciler.go:319] Volume detached for volume "coredns-token-76l2h" (UniqueName: "kubernetes.io/secret/fa8ebea5-4c71-46fe-970e-76d34d9e2b2a-coredns-token-76l2h") on node "stopped-upgrade-20210817001119-111344" DevicePath ""
	Aug 17 00:18:25 stopped-upgrade-20210817001119-111344 kubelet[1156]: E0817 00:18:25.775081    1156 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "coredns-66bff467f8-2zqdn": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:365: sending config to init process caused \"write init-p: broken pipe\"": unknown
	Aug 17 00:18:25 stopped-upgrade-20210817001119-111344 kubelet[1156]: E0817 00:18:25.779713    1156 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "coredns-66bff467f8-2zqdn_kube-system(fa8ebea5-4c71-46fe-970e-76d34d9e2b2a)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "coredns-66bff467f8-2zqdn": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:365: sending config to init process caused \"write init-p: broken pipe\"": unknown
	Aug 17 00:18:25 stopped-upgrade-20210817001119-111344 kubelet[1156]: E0817 00:18:25.779764    1156 kuberuntime_manager.go:727] createPodSandbox for pod "coredns-66bff467f8-2zqdn_kube-system(fa8ebea5-4c71-46fe-970e-76d34d9e2b2a)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "coredns-66bff467f8-2zqdn": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:365: sending config to init process caused \"write init-p: broken pipe\"": unknown
	Aug 17 00:18:25 stopped-upgrade-20210817001119-111344 kubelet[1156]: E0817 00:18:25.779887    1156 pod_workers.go:191] Error syncing pod fa8ebea5-4c71-46fe-970e-76d34d9e2b2a ("coredns-66bff467f8-2zqdn_kube-system(fa8ebea5-4c71-46fe-970e-76d34d9e2b2a)"), skipping: failed to "CreatePodSandbox" for "coredns-66bff467f8-2zqdn_kube-system(fa8ebea5-4c71-46fe-970e-76d34d9e2b2a)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-66bff467f8-2zqdn_kube-system(fa8ebea5-4c71-46fe-970e-76d34d9e2b2a)\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"coredns-66bff467f8-2zqdn\": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused \"process_linux.go:365: sending config to init process caused \\\"write init-p: broken pipe\\\"\": unknown"
	Aug 17 00:18:30 stopped-upgrade-20210817001119-111344 kubelet[1156]: W0817 00:18:30.151192    1156 pod_container_deletor.go:77] Container "6cf864874bea07f4b0555fc44209947bc3094e46d4f420f84da165af2e5e64d0" not found in pod's containers
	Aug 17 00:18:30 stopped-upgrade-20210817001119-111344 kubelet[1156]: W0817 00:18:30.191633    1156 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-6d9nh through plugin: invalid network status for
	Aug 17 00:18:30 stopped-upgrade-20210817001119-111344 kubelet[1156]: I0817 00:18:30.193314    1156 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 1cd4a638b1b8a1a918f004b98a19b76e8e598b740f7153677fb06c9d5f7bcf4b
	Aug 17 00:18:30 stopped-upgrade-20210817001119-111344 kubelet[1156]: W0817 00:18:30.583091    1156 pod_container_deletor.go:77] Container "2b7161901350196e7a763c49fc9336a882317aeab1e9009397a3608c8413f5df" not found in pod's containers
	Aug 17 00:18:30 stopped-upgrade-20210817001119-111344 kubelet[1156]: I0817 00:18:30.722752    1156 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 0d5269c4aa848ad39fb409a52beda778f89266cf0ef682fdc9528a6fc9e35f96
	Aug 17 00:18:30 stopped-upgrade-20210817001119-111344 kubelet[1156]: W0817 00:18:30.909994    1156 pod_container_deletor.go:77] Container "674aece6797ece38c945398c5e729f4b95e2cce07b1e3aec758610eb0f26d7e6" not found in pod's containers
	Aug 17 00:18:31 stopped-upgrade-20210817001119-111344 kubelet[1156]: I0817 00:18:31.007779    1156 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: b1c3fba68b4c7cbdd47310f8fdfd6356d893d58864b29cb81bfe14068b2b3d3d
	Aug 17 00:18:31 stopped-upgrade-20210817001119-111344 kubelet[1156]: W0817 00:18:31.052210    1156 pod_container_deletor.go:77] Container "afe0b19e43e024dca9e588af5f3e258cc95aafca0dd97317280d6e23a41de3fb" not found in pod's containers
	Aug 17 00:18:31 stopped-upgrade-20210817001119-111344 kubelet[1156]: W0817 00:18:31.142768    1156 pod_container_deletor.go:77] Container "9c21464c593b67b0668766b30d0a5858e643119b7750becf14491331d554f242" not found in pod's containers
	Aug 17 00:18:33 stopped-upgrade-20210817001119-111344 kubelet[1156]: W0817 00:18:33.293092    1156 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-6d9nh through plugin: invalid network status for
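
The CreatePodSandbox errors above are the OCI runtime failing mid-upgrade ("write init-p: broken pipe"); the later pod_container_deletor lines show the kubelet cleaning up the stale sandboxes and retrying. A hypothetical way to watch the retry converge, assuming this profile's kubectl context exists and coredns carries the standard k8s-app=kube-dns label, is:

	# Sketch only: watch coredns recover after the sandbox creation failure.
	kubectl --context stopped-upgrade-20210817001119-111344 -n kube-system \
	  get pods -l k8s-app=kube-dns --watch
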
	
	* 
	* ==> storage-provisioner [2af10de3bff0] <==
	* 
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0817 00:18:30.984489   70440 logs.go:190] command /bin/bash -c "docker logs --tail 60 1cd4a638b1b8" failed with error: /bin/bash -c "docker logs --tail 60 1cd4a638b1b8": Process exited with status 1
	stdout:
	
	stderr:
	Error: No such container: 1cd4a638b1b8
	 output: "\n** stderr ** \nError: No such container: 1cd4a638b1b8\n\n** /stderr **"
	E0817 00:18:34.035409   70440 logs.go:190] command /bin/bash -c "docker logs --tail 60 b1c3fba68b4c" failed with error: /bin/bash -c "docker logs --tail 60 b1c3fba68b4c": Process exited with status 1
	stdout:
	
	stderr:
	Error: No such container: b1c3fba68b4c
	 output: "\n** stderr ** \nError: No such container: b1c3fba68b4c\n\n** /stderr **"
	! unable to fetch logs for: kube-controller-manager [1cd4a638b1b8], kube-scheduler [b1c3fba68b4c]

                                                
                                                
** /stderr **
version_upgrade_test.go:210: `minikube logs` after upgrade to HEAD from v1.9.0 failed: exit status 110
--- FAIL: TestStoppedBinaryUpgrade/MinikubeLogs (21.74s)
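
The exit status 110 is driven by the two `docker logs --tail 60` calls above failing on container IDs (1cd4a638b1b8, b1c3fba68b4c) that were removed during the upgrade. A minimal sketch of a guarded version of that pattern, assuming a docker CLI on the node, is:

	# Sketch only: skip IDs that no longer exist instead of failing the whole run.
	for id in 1cd4a638b1b8 b1c3fba68b4c; do
	  if docker inspect "$id" >/dev/null 2>&1; then
	    docker logs --tail 60 "$id"
	  else
	    echo "skipping $id: container no longer exists"
	  fi
	done
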

                                                
                                    
TestNetworkPlugins/group/calico/Start (734.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p calico-20210817002204-111344 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:98: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p calico-20210817002204-111344 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker: exit status 80 (12m13.764106s)

                                                
                                                
-- stdout --
	* [calico-20210817002204-111344] minikube v1.22.0 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	  - KUBECONFIG=C:\Users\jenkins\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12230
	* Using the docker driver based on user configuration
	* Starting control plane node calico-20210817002204-111344 in cluster calico-20210817002204-111344
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.21.3 on Docker 20.10.8 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring Calico (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0817 00:40:41.414358   56104 out.go:298] Setting OutFile to fd 3612 ...
	I0817 00:40:41.415474   56104 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 00:40:41.415620   56104 out.go:311] Setting ErrFile to fd 3916...
	I0817 00:40:41.415620   56104 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 00:40:41.432169   56104 out.go:305] Setting JSON to false
	I0817 00:40:41.439424   56104 start.go:111] hostinfo: {"hostname":"windows-server-2","uptime":8368888,"bootTime":1620791953,"procs":147,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2f8328f4-5428-47c7-ab5a-b32e2504bd6f"}
	W0817 00:40:41.439631   56104 start.go:119] gopshost.Virtualization returned error: not implemented yet
	I0817 00:40:41.442193   56104 out.go:177] * [calico-20210817002204-111344] minikube v1.22.0 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	I0817 00:40:41.442796   56104 notify.go:169] Checking for updates...
	I0817 00:40:41.447735   56104 out.go:177]   - KUBECONFIG=C:\Users\jenkins\minikube-integration\kubeconfig
	I0817 00:40:41.450745   56104 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins\minikube-integration\.minikube
	I0817 00:40:41.452353   56104 out.go:177]   - MINIKUBE_LOCATION=12230
	I0817 00:40:41.454384   56104 config.go:177] Loaded profile config "cilium-20210817002204-111344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.21.3
	I0817 00:40:41.454799   56104 config.go:177] Loaded profile config "false-20210817002204-111344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.21.3
	I0817 00:40:41.455165   56104 config.go:177] Loaded profile config "newest-cni-20210817003608-111344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.0-rc.0
	I0817 00:40:41.455165   56104 driver.go:335] Setting default libvirt URI to qemu:///system
	I0817 00:40:43.366327   56104 docker.go:132] docker version: linux-20.10.2
	I0817 00:40:43.377036   56104 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 00:40:44.171755   56104 info.go:263] docker info: {ID:4XCY:3GZD:KK67:IPM7:RRQF:WWZF:OGQ6:X6HQ:572M:7N57:P63G:EAE5 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:68 OomKillDisable:true NGoroutines:61 SystemTime:2021-08-17 00:40:43.826812 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0817 00:40:44.174891   56104 out.go:177] * Using the docker driver based on user configuration
	I0817 00:40:44.175031   56104 start.go:278] selected driver: docker
	I0817 00:40:44.175137   56104 start.go:751] validating driver "docker" against <nil>
	I0817 00:40:44.175290   56104 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0817 00:40:44.259149   56104 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 00:40:45.076914   56104 info.go:263] docker info: {ID:4XCY:3GZD:KK67:IPM7:RRQF:WWZF:OGQ6:X6HQ:572M:7N57:P63G:EAE5 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:68 OomKillDisable:true NGoroutines:61 SystemTime:2021-08-17 00:40:44.7050463 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0817 00:40:45.077307   56104 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0817 00:40:45.077825   56104 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0817 00:40:45.078010   56104 cni.go:93] Creating CNI manager for "calico"
	I0817 00:40:45.078135   56104 start_flags.go:272] Found "Calico" CNI - setting NetworkPlugin=cni
	I0817 00:40:45.078135   56104 start_flags.go:277] config:
	{Name:calico-20210817002204-111344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:calico-20210817002204-111344 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 00:40:45.081062   56104 out.go:177] * Starting control plane node calico-20210817002204-111344 in cluster calico-20210817002204-111344
	I0817 00:40:45.081158   56104 cache.go:117] Beginning downloading kic base image for docker with docker
	I0817 00:40:45.083845   56104 out.go:177] * Pulling base image ...
	I0817 00:40:45.083845   56104 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
	I0817 00:40:45.083845   56104 preload.go:147] Found local preload: C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4
	I0817 00:40:45.083845   56104 cache.go:56] Caching tarball of preloaded images
	I0817 00:40:45.084674   56104 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0817 00:40:45.084953   56104 preload.go:173] Found C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0817 00:40:45.085332   56104 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on docker
	I0817 00:40:45.085789   56104 profile.go:148] Saving config to C:\Users\jenkins\minikube-integration\.minikube\profiles\calico-20210817002204-111344\config.json ...
	I0817 00:40:45.086138   56104 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\.minikube\profiles\calico-20210817002204-111344\config.json: {Name:mkf5784d74b794febe61fb0fa966d33e1cb93b8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 00:40:45.588839   56104 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0817 00:40:45.588839   56104 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0817 00:40:45.589280   56104 cache.go:205] Successfully downloaded all kic artifacts
	I0817 00:40:45.589462   56104 start.go:313] acquiring machines lock for calico-20210817002204-111344: {Name:mkf209d4b0d1f9a7855c2dc1b5a4f0c6775a6caf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 00:40:45.589836   56104 start.go:317] acquired machines lock for "calico-20210817002204-111344" in 233.5µs
	I0817 00:40:45.590094   56104 start.go:89] Provisioning new machine with config: &{Name:calico-20210817002204-111344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:calico-20210817002204-111344 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0817 00:40:45.590192   56104 start.go:126] createHost starting for "" (driver="docker")
	I0817 00:40:45.594084   56104 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0817 00:40:45.594760   56104 start.go:160] libmachine.API.Create for "calico-20210817002204-111344" (driver="docker")
	I0817 00:40:45.594952   56104 client.go:168] LocalClient.Create starting
	I0817 00:40:45.595415   56104 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem
	I0817 00:40:45.595709   56104 main.go:130] libmachine: Decoding PEM data...
	I0817 00:40:45.595782   56104 main.go:130] libmachine: Parsing certificate...
	I0817 00:40:45.596141   56104 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem
	I0817 00:40:45.596325   56104 main.go:130] libmachine: Decoding PEM data...
	I0817 00:40:45.596416   56104 main.go:130] libmachine: Parsing certificate...
	I0817 00:40:45.605457   56104 cli_runner.go:115] Run: docker network inspect calico-20210817002204-111344 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0817 00:40:46.082060   56104 cli_runner.go:162] docker network inspect calico-20210817002204-111344 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0817 00:40:46.088220   56104 network_create.go:255] running [docker network inspect calico-20210817002204-111344] to gather additional debugging logs...
	I0817 00:40:46.088220   56104 cli_runner.go:115] Run: docker network inspect calico-20210817002204-111344
	W0817 00:40:46.565366   56104 cli_runner.go:162] docker network inspect calico-20210817002204-111344 returned with exit code 1
	I0817 00:40:46.565558   56104 network_create.go:258] error running [docker network inspect calico-20210817002204-111344]: docker network inspect calico-20210817002204-111344: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-20210817002204-111344
	I0817 00:40:46.565558   56104 network_create.go:260] output of [docker network inspect calico-20210817002204-111344]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-20210817002204-111344
	
	** /stderr **
	I0817 00:40:46.580130   56104 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0817 00:40:47.083971   56104 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0003e2a20] misses:0}
	I0817 00:40:47.084572   56104 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0817 00:40:47.084572   56104 network_create.go:106] attempt to create docker network calico-20210817002204-111344 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0817 00:40:47.093836   56104 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20210817002204-111344
	I0817 00:40:47.794907   56104 network_create.go:90] docker network calico-20210817002204-111344 192.168.49.0/24 created
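
The network_create lines above show minikube reserving a free private subnet and issuing a plain docker network create for it. A hypothetical verification of the result, using only documented docker flags, is:

	# Hypothetical check of the network minikube just created.
	docker network inspect calico-20210817002204-111344 \
	  --format '{{(index .IPAM.Config 0).Subnet}} via {{(index .IPAM.Config 0).Gateway}}'
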
	I0817 00:40:47.795394   56104 kic.go:106] calculated static IP "192.168.49.2" for the "calico-20210817002204-111344" container
	I0817 00:40:47.815063   56104 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0817 00:40:48.303872   56104 cli_runner.go:115] Run: docker volume create calico-20210817002204-111344 --label name.minikube.sigs.k8s.io=calico-20210817002204-111344 --label created_by.minikube.sigs.k8s.io=true
	I0817 00:40:48.780165   56104 oci.go:102] Successfully created a docker volume calico-20210817002204-111344
	I0817 00:40:48.789739   56104 cli_runner.go:115] Run: docker run --rm --name calico-20210817002204-111344-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20210817002204-111344 --entrypoint /usr/bin/test -v calico-20210817002204-111344:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib
	I0817 00:40:51.849154   56104 cli_runner.go:168] Completed: docker run --rm --name calico-20210817002204-111344-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20210817002204-111344 --entrypoint /usr/bin/test -v calico-20210817002204-111344:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib: (3.0591001s)
	I0817 00:40:51.849288   56104 oci.go:106] Successfully prepared a docker volume calico-20210817002204-111344
	I0817 00:40:51.849417   56104 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
	I0817 00:40:51.849556   56104 kic.go:179] Starting extracting preloaded images to volume ...
	I0817 00:40:51.857899   56104 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20210817002204-111344:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir
	I0817 00:40:51.858258   56104 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	W0817 00:40:52.467924   56104 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20210817002204-111344:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I0817 00:40:52.467924   56104 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20210817002204-111344:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: System.Exception: The notification platform is unavailable.
	
	   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)
	   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__6.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.WPF\PromptShareDirectory.cs:line 53
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__8.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.ApiServices\Mounting\FileSharing.cs:line 95
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__6.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.ApiServices\Mounting\FileSharing.cs:line 55
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.HttpApi\Controllers\FilesharingController.cs:line 21
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()
	See 'docker run --help'.
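
The root cause above is Docker Desktop's file-sharing flow: mounting the preload tarball from under C:\Users\jenkins requires a sharing prompt, that prompt is delivered as a toast notification (see CreateToastNotifier in the stack), and the notification platform is unavailable on this Windows Server host, so the daemon refuses the bind mount with a 500. A hypothetical standalone reproduction, independent of minikube, is any bind mount of an unshared host path:

	# Sketch only: an unshared host path triggers the same sharing prompt.
	docker run --rm -v C:\Users\jenkins\minikube-integration:/data alpine ls /data
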
	I0817 00:40:52.765911   56104 info.go:263] docker info: {ID:4XCY:3GZD:KK67:IPM7:RRQF:WWZF:OGQ6:X6HQ:572M:7N57:P63G:EAE5 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:61 SystemTime:2021-08-17 00:40:52.366976 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0817 00:40:52.773893   56104 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0817 00:40:53.660284   56104 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20210817002204-111344 --name calico-20210817002204-111344 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20210817002204-111344 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20210817002204-111344 --network calico-20210817002204-111344 --ip 192.168.49.2 --volume calico-20210817002204-111344:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6
	I0817 00:40:57.061706   56104 cli_runner.go:168] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20210817002204-111344 --name calico-20210817002204-111344 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20210817002204-111344 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20210817002204-111344 --network calico-20210817002204-111344 --ip 192.168.49.2 --volume calico-20210817002204-111344:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6: (3.4012926s)
	I0817 00:40:57.069461   56104 cli_runner.go:115] Run: docker container inspect calico-20210817002204-111344 --format={{.State.Running}}
	I0817 00:40:57.646582   56104 cli_runner.go:115] Run: docker container inspect calico-20210817002204-111344 --format={{.State.Status}}
	I0817 00:40:58.179381   56104 cli_runner.go:115] Run: docker exec calico-20210817002204-111344 stat /var/lib/dpkg/alternatives/iptables
	I0817 00:40:59.068671   56104 oci.go:278] the created container "calico-20210817002204-111344" has a running status.
	I0817 00:40:59.068958   56104 kic.go:210] Creating ssh key for kic: C:\Users\jenkins\minikube-integration\.minikube\machines\calico-20210817002204-111344\id_rsa...
	I0817 00:40:59.171291   56104 kic_runner.go:188] docker (temp): C:\Users\jenkins\minikube-integration\.minikube\machines\calico-20210817002204-111344\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0817 00:41:03.433593   56104 cli_runner.go:115] Run: docker container inspect calico-20210817002204-111344 --format={{.State.Status}}
	I0817 00:41:03.976878   56104 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0817 00:41:03.977944   56104 kic_runner.go:115] Args: [docker exec --privileged calico-20210817002204-111344 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0817 00:41:04.721800   56104 kic.go:250] ensuring only current user has permissions to key file located at : C:\Users\jenkins\minikube-integration\.minikube\machines\calico-20210817002204-111344\id_rsa...
	I0817 00:41:05.357013   56104 cli_runner.go:115] Run: docker container inspect calico-20210817002204-111344 --format={{.State.Status}}
	I0817 00:41:05.844915   56104 machine.go:88] provisioning docker machine ...
	I0817 00:41:05.844915   56104 ubuntu.go:169] provisioning hostname "calico-20210817002204-111344"
	I0817 00:41:05.855224   56104 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210817002204-111344
	I0817 00:41:06.364989   56104 main.go:130] libmachine: Using SSH client type: native
	I0817 00:41:06.374468   56104 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x4c95a0] 0x4c9560 <nil>  [] 0s} 127.0.0.1 55228 <nil> <nil>}
	I0817 00:41:06.374468   56104 main.go:130] libmachine: About to run SSH command:
	sudo hostname calico-20210817002204-111344 && echo "calico-20210817002204-111344" | sudo tee /etc/hostname
	I0817 00:41:06.679717   56104 main.go:130] libmachine: SSH cmd err, output: <nil>: calico-20210817002204-111344
	
	I0817 00:41:06.692613   56104 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210817002204-111344
	I0817 00:41:07.198290   56104 main.go:130] libmachine: Using SSH client type: native
	I0817 00:41:07.198290   56104 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x4c95a0] 0x4c9560 <nil>  [] 0s} 127.0.0.1 55228 <nil> <nil>}
	I0817 00:41:07.198290   56104 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-20210817002204-111344' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-20210817002204-111344/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-20210817002204-111344' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 00:41:07.503108   56104 main.go:130] libmachine: SSH cmd err, output: <nil>: 
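
[Editor's note] The second SSH command is deliberately idempotent: it only touches /etc/hosts when the hostname is missing, and prefers rewriting an existing 127.0.1.1 entry over appending a new one. Roughly, such a script can be templated as below (hostsScript and example-host are illustrative, not minikube's own helper):

    package main

    import "fmt"

    // hostsScript templates the idempotent /etc/hosts edit logged above
    // for an arbitrary hostname. Assumed helper name, for illustration.
    func hostsScript(hostname string) string {
        return fmt.Sprintf(`
        if ! grep -xq '.*\s%[1]s' /etc/hosts; then
            if grep -xq '127.0.1.1\s.*' /etc/hosts; then
                sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
            else
                echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
            fi
        fi`, hostname)
    }

    func main() { fmt.Println(hostsScript("example-host")) }
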
	I0817 00:41:07.503207   56104 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins\minikube-integration\.minikube CaCertPath:C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins\minikube-integration\.minikube}
	I0817 00:41:07.503207   56104 ubuntu.go:177] setting up certificates
	I0817 00:41:07.503207   56104 provision.go:83] configureAuth start
	I0817 00:41:07.515595   56104 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20210817002204-111344
	I0817 00:41:08.024514   56104 provision.go:138] copyHostCerts
	I0817 00:41:08.024780   56104 exec_runner.go:145] found C:\Users\jenkins\minikube-integration\.minikube/ca.pem, removing ...
	I0817 00:41:08.024780   56104 exec_runner.go:190] rm: C:\Users\jenkins\minikube-integration\.minikube\ca.pem
	I0817 00:41:08.025289   56104 exec_runner.go:152] cp: C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0817 00:41:08.026734   56104 exec_runner.go:145] found C:\Users\jenkins\minikube-integration\.minikube/cert.pem, removing ...
	I0817 00:41:08.026734   56104 exec_runner.go:190] rm: C:\Users\jenkins\minikube-integration\.minikube\cert.pem
	I0817 00:41:08.027242   56104 exec_runner.go:152] cp: C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0817 00:41:08.028686   56104 exec_runner.go:145] found C:\Users\jenkins\minikube-integration\.minikube/key.pem, removing ...
	I0817 00:41:08.028686   56104 exec_runner.go:190] rm: C:\Users\jenkins\minikube-integration\.minikube\key.pem
	I0817 00:41:08.028950   56104 exec_runner.go:152] cp: C:\Users\jenkins\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins\minikube-integration\.minikube/key.pem (1679 bytes)
	I0817 00:41:08.030158   56104 provision.go:112] generating server cert: C:\Users\jenkins\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.calico-20210817002204-111344 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube calico-20210817002204-111344]
	I0817 00:41:08.645555   56104 provision.go:172] copyRemoteCerts
	I0817 00:41:08.652522   56104 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 00:41:08.658522   56104 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210817002204-111344
	I0817 00:41:09.205478   56104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55228 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\calico-20210817002204-111344\id_rsa Username:docker}
	I0817 00:41:09.408738   56104 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0817 00:41:09.506906   56104 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1257 bytes)
	I0817 00:41:09.598350   56104 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0817 00:41:09.687089   56104 provision.go:86] duration metric: configureAuth took 2.1837997s
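
[Editor's note] configureAuth generates a Docker server certificate signed by the local minikube CA and copies ca.pem, server.pem and server-key.pem into /etc/docker so the daemon can serve TLS on port 2376. The SAN list in the "generating server cert" line covers every name a client might dial; a sketch (serverCertSANs is an assumed name):

    package main

    import "fmt"

    // serverCertSANs mirrors the subject-alternative names visible in the
    // provisioning log above: node IP, loopback, and the machine's names.
    // Illustrative helper, not minikube's own.
    func serverCertSANs(nodeIP, machineName string) []string {
        return []string{nodeIP, "127.0.0.1", "localhost", "minikube", machineName}
    }

    func main() {
        fmt.Println(serverCertSANs("192.168.49.2", "calico-20210817002204-111344"))
    }
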
	I0817 00:41:09.687234   56104 ubuntu.go:193] setting minikube options for container-runtime
	I0817 00:41:09.687473   56104 config.go:177] Loaded profile config "calico-20210817002204-111344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.21.3
	I0817 00:41:09.696587   56104 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210817002204-111344
	I0817 00:41:10.178786   56104 main.go:130] libmachine: Using SSH client type: native
	I0817 00:41:10.179337   56104 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x4c95a0] 0x4c9560 <nil>  [] 0s} 127.0.0.1 55228 <nil> <nil>}
	I0817 00:41:10.179337   56104 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0817 00:41:10.441281   56104 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0817 00:41:10.441281   56104 ubuntu.go:71] root file system type: overlay
	I0817 00:41:10.441847   56104 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0817 00:41:10.453927   56104 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210817002204-111344
	I0817 00:41:10.964294   56104 main.go:130] libmachine: Using SSH client type: native
	I0817 00:41:10.964495   56104 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x4c95a0] 0x4c9560 <nil>  [] 0s} 127.0.0.1 55228 <nil> <nil>}
	I0817 00:41:10.964495   56104 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0817 00:41:11.316541   56104 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0817 00:41:11.326041   56104 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210817002204-111344
	I0817 00:41:11.805673   56104 main.go:130] libmachine: Using SSH client type: native
	I0817 00:41:11.806204   56104 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x4c95a0] 0x4c9560 <nil>  [] 0s} 127.0.0.1 55228 <nil> <nil>}
	I0817 00:41:11.806204   56104 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0817 00:41:20.549697   56104 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-07-30 19:52:33.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2021-08-17 00:41:11.305944000 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	+BindsTo=containerd.service
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0817 00:41:20.549697   56104 machine.go:91] provisioned docker machine in 14.7042236s
	I0817 00:41:20.549874   56104 client.go:171] LocalClient.Create took 34.9535939s
	I0817 00:41:20.549874   56104 start.go:168] duration metric: libmachine.API.Create for "calico-20210817002204-111344" took 34.9537862s
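
[Editor's note] The new unit is first written to docker.service.new; the diff above shows exactly what replaces the stock unit (note the empty ExecStart= line, which clears the inherited command first, since systemd rejects multiple ExecStart= settings for Type=notify services, exactly as the comments in the unit warn). The daemon is only reloaded and restarted when the two files actually differ. Schematically (unitUpdateCmd is an assumed helper):

    package main

    import "fmt"

    // unitUpdateCmd reproduces the idempotent-update idiom logged above:
    // replace and restart only when the freshly written unit differs from
    // the live one. Illustrative helper.
    func unitUpdateCmd(unit string) string {
        path := "/lib/systemd/system/" + unit
        return fmt.Sprintf(
            "sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
                "sudo systemctl -f daemon-reload && sudo systemctl -f enable %[2]s && "+
                "sudo systemctl -f restart %[2]s; }", path, unit)
    }

    func main() { fmt.Println(unitUpdateCmd("docker.service")) }
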
	I0817 00:41:20.549874   56104 start.go:267] post-start starting for "calico-20210817002204-111344" (driver="docker")
	I0817 00:41:20.550010   56104 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 00:41:20.563428   56104 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 00:41:20.568347   56104 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210817002204-111344
	I0817 00:41:21.072964   56104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55228 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\calico-20210817002204-111344\id_rsa Username:docker}
	I0817 00:41:21.320117   56104 ssh_runner.go:149] Run: cat /etc/os-release
	I0817 00:41:21.348703   56104 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0817 00:41:21.348912   56104 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0817 00:41:21.348912   56104 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0817 00:41:21.348912   56104 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0817 00:41:21.349017   56104 filesync.go:126] Scanning C:\Users\jenkins\minikube-integration\.minikube\addons for local assets ...
	I0817 00:41:21.349373   56104 filesync.go:126] Scanning C:\Users\jenkins\minikube-integration\.minikube\files for local assets ...
	I0817 00:41:21.350263   56104 filesync.go:149] local asset: C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\1113442.pem -> 1113442.pem in /etc/ssl/certs
	I0817 00:41:21.362416   56104 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0817 00:41:21.419087   56104 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\1113442.pem --> /etc/ssl/certs/1113442.pem (1708 bytes)
	I0817 00:41:21.527990   56104 start.go:270] post-start completed in 977.9421ms
	I0817 00:41:21.540933   56104 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20210817002204-111344
	I0817 00:41:22.027428   56104 profile.go:148] Saving config to C:\Users\jenkins\minikube-integration\.minikube\profiles\calico-20210817002204-111344\config.json ...
	I0817 00:41:22.045947   56104 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0817 00:41:22.051535   56104 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210817002204-111344
	I0817 00:41:22.551583   56104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55228 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\calico-20210817002204-111344\id_rsa Username:docker}
	I0817 00:41:22.720547   56104 start.go:129] duration metric: createHost completed in 37.1289442s
	I0817 00:41:22.720547   56104 start.go:80] releasing machines lock for "calico-20210817002204-111344", held for 37.1291685s
	I0817 00:41:22.726546   56104 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20210817002204-111344
	I0817 00:41:23.243144   56104 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0817 00:41:23.250156   56104 ssh_runner.go:149] Run: systemctl --version
	I0817 00:41:23.257223   56104 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210817002204-111344
	I0817 00:41:23.258224   56104 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210817002204-111344
	I0817 00:41:23.784962   56104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55228 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\calico-20210817002204-111344\id_rsa Username:docker}
	I0817 00:41:23.799096   56104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55228 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\calico-20210817002204-111344\id_rsa Username:docker}
	I0817 00:41:24.138981   56104 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0817 00:41:24.212049   56104 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0817 00:41:24.269331   56104 cruntime.go:249] skipping containerd shutdown because we are bound to it
	I0817 00:41:24.278285   56104 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0817 00:41:24.340665   56104 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 00:41:24.454712   56104 ssh_runner.go:149] Run: sudo systemctl unmask docker.service
	I0817 00:41:24.878426   56104 ssh_runner.go:149] Run: sudo systemctl enable docker.socket
	I0817 00:41:25.213640   56104 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0817 00:41:25.314743   56104 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0817 00:41:25.700034   56104 ssh_runner.go:149] Run: sudo systemctl start docker
	I0817 00:41:25.828465   56104 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0817 00:41:26.186372   56104 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0817 00:41:26.423470   56104 out.go:204] * Preparing Kubernetes v1.21.3 on Docker 20.10.8 ...
	I0817 00:41:26.429333   56104 cli_runner.go:115] Run: docker exec -t calico-20210817002204-111344 dig +short host.docker.internal
	I0817 00:41:27.421343   56104 network.go:69] got host ip for mount in container by digging dns: 192.168.65.2
	I0817 00:41:27.431349   56104 ssh_runner.go:149] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0817 00:41:27.462146   56104 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
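
[Editor's note] On Docker Desktop the host is reachable from containers as host.docker.internal; minikube digs that alias inside the node (yielding 192.168.65.2 here) and pins the answer as host.minikube.internal in the node's /etc/hosts. A sketch of the discovery step (hostIP and demo-node are illustrative):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // hostIP digs Docker Desktop's host.docker.internal alias from inside
    // the node container to learn the host's address. Illustrative helper.
    func hostIP(node string) (string, error) {
        out, err := exec.Command("docker", "exec", "-t", node,
            "dig", "+short", "host.docker.internal").Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        ip, err := hostIP("demo-node")
        fmt.Println(ip, err)
    }
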
	I0817 00:41:27.550569   56104 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" calico-20210817002204-111344
	I0817 00:41:28.090499   56104 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
	I0817 00:41:28.104082   56104 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0817 00:41:28.342559   56104 docker.go:535] Got preloaded images: 
	I0817 00:41:28.342559   56104 docker.go:541] k8s.gcr.io/kube-apiserver:v1.21.3 wasn't preloaded
	I0817 00:41:28.350372   56104 ssh_runner.go:149] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0817 00:41:28.414306   56104 ssh_runner.go:149] Run: which lz4
	I0817 00:41:28.483388   56104 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0817 00:41:28.514245   56104 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0817 00:41:28.514581   56104 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (504826016 bytes)
	I0817 00:42:33.245250   56104 docker.go:500] Took 64.781357 seconds to copy over tarball
	I0817 00:42:33.258278   56104 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0817 00:42:57.634378   56104 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (24.3751736s)
	I0817 00:42:57.634623   56104 ssh_runner.go:100] rm: /preloaded.tar.lz4
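
[Editor's note] Because k8s.gcr.io/kube-apiserver:v1.21.3 wasn't already in the image store, the ~505 MB preloaded tarball is copied into the node (64.8 s here) and unpacked with lz4 directly over /var, which is much faster than pulling each image individually. A sketch of the remote steps, run locally for illustration (the scp copy itself is omitted; minikube drives each step through its ssh_runner):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Probe for the tarball, extract it into /var, then delete it --
        // the same sequence the log shows above.
        steps := [][]string{
            {"stat", "-c", "%s %y", "/preloaded.tar.lz4"},
            {"sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4"},
            {"sudo", "rm", "/preloaded.tar.lz4"},
        }
        for _, s := range steps {
            if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
                fmt.Printf("%v: %v: %s\n", s, err, out)
            }
        }
    }
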
	I0817 00:42:58.112683   56104 ssh_runner.go:149] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0817 00:42:58.146140   56104 ssh_runner.go:316] scp memory --> /var/lib/docker/image/overlay2/repositories.json (3152 bytes)
	I0817 00:42:58.241291   56104 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0817 00:42:58.651854   56104 ssh_runner.go:149] Run: sudo systemctl restart docker
	I0817 00:43:07.702545   56104 ssh_runner.go:189] Completed: sudo systemctl restart docker: (9.0503466s)
	I0817 00:43:07.713921   56104 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0817 00:43:07.941442   56104 docker.go:535] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.21.3
	k8s.gcr.io/kube-scheduler:v1.21.3
	k8s.gcr.io/kube-controller-manager:v1.21.3
	k8s.gcr.io/kube-proxy:v1.21.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.4.1
	kubernetesui/dashboard:v2.1.0
	k8s.gcr.io/coredns/coredns:v1.8.0
	k8s.gcr.io/etcd:3.4.13-0
	kubernetesui/metrics-scraper:v1.0.4
	
	-- /stdout --
	I0817 00:43:07.941710   56104 cache_images.go:74] Images are preloaded, skipping loading
	I0817 00:43:07.958376   56104 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
	I0817 00:43:08.460977   56104 cni.go:93] Creating CNI manager for "calico"
	I0817 00:43:08.461451   56104 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0817 00:43:08.461984   56104 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-20210817002204-111344 NodeName:calico-20210817002204-111344 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0817 00:43:08.464579   56104 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "calico-20210817002204-111344"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
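[Editor's note] The generated kubeadm.yaml is one multi-document manifest carrying four kinds at once: InitConfiguration (node endpoint and bootstrap token), ClusterConfiguration (API server SANs, admission plugins, pod and service subnets), KubeletConfiguration (cgroupfs driver, eviction disabled), and KubeProxyConfiguration (cluster CIDR, conntrack overrides). A sketch of splitting such a manifest into its documents (splitYAMLDocs is illustrative):

    package main

    import (
        "fmt"
        "strings"
    )

    // splitYAMLDocs separates a multi-document manifest like the generated
    // kubeadm.yaml above on its "---" document separators.
    func splitYAMLDocs(manifest string) []string {
        var docs []string
        for _, d := range strings.Split(manifest, "\n---\n") {
            if strings.TrimSpace(d) != "" {
                docs = append(docs, strings.TrimSpace(d))
            }
        }
        return docs
    }

    func main() {
        fmt.Println(len(splitYAMLDocs("kind: InitConfiguration\n---\nkind: ClusterConfiguration\n")))
    }
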
	I0817 00:43:08.466102   56104 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=calico-20210817002204-111344 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:calico-20210817002204-111344 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
	I0817 00:43:08.475576   56104 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0817 00:43:08.512257   56104 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 00:43:08.520558   56104 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 00:43:08.549475   56104 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0817 00:43:08.599929   56104 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0817 00:43:08.668097   56104 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I0817 00:43:08.748327   56104 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0817 00:43:08.766321   56104 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 00:43:08.806387   56104 certs.go:52] Setting up C:\Users\jenkins\minikube-integration\.minikube\profiles\calico-20210817002204-111344 for IP: 192.168.49.2
	I0817 00:43:08.807097   56104 certs.go:179] skipping minikubeCA CA generation: C:\Users\jenkins\minikube-integration\.minikube\ca.key
	I0817 00:43:08.807573   56104 certs.go:179] skipping proxyClientCA CA generation: C:\Users\jenkins\minikube-integration\.minikube\proxy-client-ca.key
	I0817 00:43:08.808261   56104 certs.go:297] generating minikube-user signed cert: C:\Users\jenkins\minikube-integration\.minikube\profiles\calico-20210817002204-111344\client.key
	I0817 00:43:08.808261   56104 crypto.go:69] Generating cert C:\Users\jenkins\minikube-integration\.minikube\profiles\calico-20210817002204-111344\client.crt with IP's: []
	I0817 00:43:08.949247   56104 crypto.go:157] Writing cert to C:\Users\jenkins\minikube-integration\.minikube\profiles\calico-20210817002204-111344\client.crt ...
	I0817 00:43:08.949247   56104 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\.minikube\profiles\calico-20210817002204-111344\client.crt: {Name:mk955276f37c3d42aefdff570ccf82fba27efd20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 00:43:08.951234   56104 crypto.go:165] Writing key to C:\Users\jenkins\minikube-integration\.minikube\profiles\calico-20210817002204-111344\client.key ...
	I0817 00:43:08.951234   56104 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\.minikube\profiles\calico-20210817002204-111344\client.key: {Name:mk632c97b384def59e5d4fbd95b6aa4069a1e3bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 00:43:08.953248   56104 certs.go:297] generating minikube signed cert: C:\Users\jenkins\minikube-integration\.minikube\profiles\calico-20210817002204-111344\apiserver.key.dd3b5fb2
	I0817 00:43:08.953248   56104 crypto.go:69] Generating cert C:\Users\jenkins\minikube-integration\.minikube\profiles\calico-20210817002204-111344\apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0817 00:43:09.191367   56104 crypto.go:157] Writing cert to C:\Users\jenkins\minikube-integration\.minikube\profiles\calico-20210817002204-111344\apiserver.crt.dd3b5fb2 ...
	I0817 00:43:09.191367   56104 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\.minikube\profiles\calico-20210817002204-111344\apiserver.crt.dd3b5fb2: {Name:mka1c31188217024e994a1ef092625b0fb1cc469 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 00:43:09.193115   56104 crypto.go:165] Writing key to C:\Users\jenkins\minikube-integration\.minikube\profiles\calico-20210817002204-111344\apiserver.key.dd3b5fb2 ...
	I0817 00:43:09.193115   56104 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\.minikube\profiles\calico-20210817002204-111344\apiserver.key.dd3b5fb2: {Name:mk889b6c0e2fabd695e146c18891a3bfebe368e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 00:43:09.194888   56104 certs.go:308] copying C:\Users\jenkins\minikube-integration\.minikube\profiles\calico-20210817002204-111344\apiserver.crt.dd3b5fb2 -> C:\Users\jenkins\minikube-integration\.minikube\profiles\calico-20210817002204-111344\apiserver.crt
	I0817 00:43:09.208050   56104 certs.go:312] copying C:\Users\jenkins\minikube-integration\.minikube\profiles\calico-20210817002204-111344\apiserver.key.dd3b5fb2 -> C:\Users\jenkins\minikube-integration\.minikube\profiles\calico-20210817002204-111344\apiserver.key
	I0817 00:43:09.209755   56104 certs.go:297] generating aggregator signed cert: C:\Users\jenkins\minikube-integration\.minikube\profiles\calico-20210817002204-111344\proxy-client.key
	I0817 00:43:09.209755   56104 crypto.go:69] Generating cert C:\Users\jenkins\minikube-integration\.minikube\profiles\calico-20210817002204-111344\proxy-client.crt with IP's: []
	I0817 00:43:09.628576   56104 crypto.go:157] Writing cert to C:\Users\jenkins\minikube-integration\.minikube\profiles\calico-20210817002204-111344\proxy-client.crt ...
	I0817 00:43:09.628576   56104 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\.minikube\profiles\calico-20210817002204-111344\proxy-client.crt: {Name:mk27cb741cf742e300334f74b676c0ea5301f7a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 00:43:09.631130   56104 crypto.go:165] Writing key to C:\Users\jenkins\minikube-integration\.minikube\profiles\calico-20210817002204-111344\proxy-client.key ...
	I0817 00:43:09.631130   56104 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\.minikube\profiles\calico-20210817002204-111344\proxy-client.key: {Name:mkeb6cb5f69001b3b046fb1982272e22ebf702cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 00:43:09.639951   56104 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\111344.pem (1338 bytes)
	W0817 00:43:09.639951   56104 certs.go:372] ignoring C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\111344_empty.pem, impossibly tiny 0 bytes
	I0817 00:43:09.639951   56104 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0817 00:43:09.641230   56104 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0817 00:43:09.641592   56104 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0817 00:43:09.641978   56104 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0817 00:43:09.642577   56104 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\1113442.pem (1708 bytes)
	I0817 00:43:09.645045   56104 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\calico-20210817002204-111344\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 00:43:09.738664   56104 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\calico-20210817002204-111344\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0817 00:43:09.807707   56104 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\calico-20210817002204-111344\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 00:43:09.884342   56104 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\calico-20210817002204-111344\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0817 00:43:09.963527   56104 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 00:43:10.039453   56104 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0817 00:43:10.130622   56104 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 00:43:10.206833   56104 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0817 00:43:10.284686   56104 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 00:43:10.362919   56104 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\certs\111344.pem --> /usr/share/ca-certificates/111344.pem (1338 bytes)
	I0817 00:43:10.437152   56104 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\1113442.pem --> /usr/share/ca-certificates/1113442.pem (1708 bytes)
	I0817 00:43:10.525983   56104 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0817 00:43:10.599163   56104 ssh_runner.go:149] Run: openssl version
	I0817 00:43:10.629151   56104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 00:43:10.678585   56104 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 00:43:10.696752   56104 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 16 23:12 /usr/share/ca-certificates/minikubeCA.pem
	I0817 00:43:10.703758   56104 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 00:43:10.737172   56104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0817 00:43:10.798297   56104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111344.pem && ln -fs /usr/share/ca-certificates/111344.pem /etc/ssl/certs/111344.pem"
	I0817 00:43:10.843299   56104 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/111344.pem
	I0817 00:43:10.862532   56104 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 16 23:23 /usr/share/ca-certificates/111344.pem
	I0817 00:43:10.869286   56104 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111344.pem
	I0817 00:43:10.898221   56104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/111344.pem /etc/ssl/certs/51391683.0"
	I0817 00:43:10.933236   56104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1113442.pem && ln -fs /usr/share/ca-certificates/1113442.pem /etc/ssl/certs/1113442.pem"
	I0817 00:43:10.975421   56104 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1113442.pem
	I0817 00:43:10.994881   56104 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 16 23:23 /usr/share/ca-certificates/1113442.pem
	I0817 00:43:11.001908   56104 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1113442.pem
	I0817 00:43:11.050879   56104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1113442.pem /etc/ssl/certs/3ec20f2e.0"
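
[Editor's note] Each CA is installed twice: copied under /usr/share/ca-certificates, then symlinked into /etc/ssl/certs under the name OpenSSL actually looks up, <subject-hash>.0 (so b5213941.0 above is the subject hash of minikubeCA.pem). A sketch of deriving that link name (hashLinkName is an assumed helper):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // hashLinkName computes the OpenSSL subject hash of a certificate and
    // derives the /etc/ssl/certs/<hash>.0 symlink name used above.
    func hashLinkName(pemPath string) (string, error) {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return "", err
        }
        return "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0", nil
    }

    func main() {
        name, err := hashLinkName("/usr/share/ca-certificates/minikubeCA.pem")
        fmt.Println(name, err)
    }
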
	I0817 00:43:11.083949   56104 kubeadm.go:390] StartCluster: {Name:calico-20210817002204-111344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:calico-20210817002204-111344 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 00:43:11.094879   56104 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0817 00:43:11.236058   56104 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 00:43:11.299012   56104 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 00:43:11.331581   56104 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0817 00:43:11.344424   56104 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 00:43:11.389343   56104 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 00:43:11.389482   56104 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0817 00:44:10.258976   56104 out.go:204]   - Generating certificates and keys ...
	I0817 00:44:10.279788   56104 out.go:204]   - Booting up control plane ...
	I0817 00:44:10.283702   56104 out.go:204]   - Configuring RBAC rules ...
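
[Editor's note] kubeadm init is driven entirely by the generated config; the --ignore-preflight-errors list waives checks that are meaningless or guaranteed to fail when the "node" is itself a Docker container (Swap, Mem, SystemVerification, and the manifest files already staged under /etc/kubernetes/manifests). Assembling such an invocation, schematically (initCmd is an assumed helper):

    package main

    import (
        "fmt"
        "strings"
    )

    // initCmd builds a config-driven "kubeadm init" like the one above,
    // with a comma-joined list of waived preflight checks.
    func initCmd(binDir, cfg string, ignores []string) string {
        return fmt.Sprintf(
            "sudo env PATH=%s:$PATH kubeadm init --config %s --ignore-preflight-errors=%s",
            binDir, cfg, strings.Join(ignores, ","))
    }

    func main() {
        fmt.Println(initCmd("/var/lib/minikube/binaries/v1.21.3",
            "/var/tmp/minikube/kubeadm.yaml",
            []string{"Swap", "Mem", "SystemVerification", "Port-10250"}))
    }
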
	I0817 00:44:10.288624   56104 cni.go:93] Creating CNI manager for "calico"
	I0817 00:44:10.291624   56104 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I0817 00:44:10.291624   56104 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0817 00:44:10.292624   56104 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (202049 bytes)
	I0817 00:44:10.780322   56104 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0817 00:44:30.599715   56104 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (19.8186139s)
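
[Editor's note] Since the profile requested CNI "calico", the full Calico manifest (about 202 KB) is scp'ed to /var/tmp/minikube/cni.yaml and applied with the version-matched kubectl shipped under /var/lib/minikube/binaries; the apply alone took ~19.8 s here. A sketch of the application step (applyCNI is an assumed helper):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // applyCNI invokes the bundled kubectl with an explicit kubeconfig
    // against the just-booted API server, as logged above.
    func applyCNI(kubectl, kubeconfig, manifest string) error {
        out, err := exec.Command("sudo", kubectl,
            "apply", "--kubeconfig="+kubeconfig, "-f", manifest).CombinedOutput()
        if err != nil {
            return fmt.Errorf("apply cni: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        _ = applyCNI("/var/lib/minikube/binaries/v1.21.3/kubectl",
            "/var/lib/minikube/kubeconfig", "/var/tmp/minikube/cni.yaml")
    }
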
	I0817 00:44:30.599715   56104 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0817 00:44:30.615746   56104 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48 minikube.k8s.io/name=calico-20210817002204-111344 minikube.k8s.io/updated_at=2021_08_17T00_44_30_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 00:44:30.618021   56104 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 00:44:33.431811   56104 ssh_runner.go:189] Completed: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj": (2.831989s)
	I0817 00:44:33.432052   56104 ops.go:34] apiserver oom_adj: -16
	I0817 00:44:41.709883   56104 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48 minikube.k8s.io/name=calico-20210817002204-111344 minikube.k8s.io/updated_at=2021_08_17T00_44_30_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig: (11.0934786s)
	I0817 00:44:41.709883   56104 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig: (11.0914402s)
	I0817 00:44:41.721755   56104 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 00:44:45.805048   56104 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (4.0831377s)
	I0817 00:44:45.805302   56104 kubeadm.go:985] duration metric: took 15.2050098s to wait for elevateKubeSystemPrivileges.
	I0817 00:44:45.805302   56104 kubeadm.go:392] StartCluster complete in 1m34.7177284s
	I0817 00:44:45.805538   56104 settings.go:142] acquiring lock: {Name:mk81656fcf8bcddd49caaa1adb1c177165a02100 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 00:44:45.805958   56104 settings.go:150] Updating kubeconfig:  C:\Users\jenkins\minikube-integration\kubeconfig
	I0817 00:44:45.818564   56104 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\kubeconfig: {Name:mk312e0248780fd448f3a83862df8ee597f47373 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 00:44:46.583617   56104 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "calico-20210817002204-111344" rescaled to 1
	I0817 00:44:46.583839   56104 start.go:226] Will wait 5m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0817 00:44:46.586112   56104 out.go:177] * Verifying Kubernetes components...
	I0817 00:44:46.584040   56104 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0817 00:44:46.584040   56104 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0817 00:44:46.585244   56104 config.go:177] Loaded profile config "calico-20210817002204-111344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.21.3
	I0817 00:44:46.586564   56104 addons.go:59] Setting default-storageclass=true in profile "calico-20210817002204-111344"
	I0817 00:44:46.586564   56104 addons.go:59] Setting storage-provisioner=true in profile "calico-20210817002204-111344"
	I0817 00:44:46.586824   56104 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-20210817002204-111344"
	I0817 00:44:46.586824   56104 addons.go:135] Setting addon storage-provisioner=true in "calico-20210817002204-111344"
	W0817 00:44:46.586824   56104 addons.go:147] addon storage-provisioner should already be in state true
	I0817 00:44:46.587015   56104 host.go:66] Checking if "calico-20210817002204-111344" exists ...
	I0817 00:44:46.604790   56104 cli_runner.go:115] Run: docker container inspect calico-20210817002204-111344 --format={{.State.Status}}
	I0817 00:44:46.605322   56104 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 00:44:46.607717   56104 cli_runner.go:115] Run: docker container inspect calico-20210817002204-111344 --format={{.State.Status}}
	I0817 00:44:47.145460   56104 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 00:44:47.145460   56104 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 00:44:47.145460   56104 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0817 00:44:47.152820   56104 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210817002204-111344
	I0817 00:44:47.204041   56104 addons.go:135] Setting addon default-storageclass=true in "calico-20210817002204-111344"
	W0817 00:44:47.204041   56104 addons.go:147] addon default-storageclass should already be in state true
	I0817 00:44:47.204241   56104 host.go:66] Checking if "calico-20210817002204-111344" exists ...
	I0817 00:44:47.226641   56104 cli_runner.go:115] Run: docker container inspect calico-20210817002204-111344 --format={{.State.Status}}
	I0817 00:44:47.711252   56104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55228 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\calico-20210817002204-111344\id_rsa Username:docker}
	I0817 00:44:47.753200   56104 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0817 00:44:47.753200   56104 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0817 00:44:47.759860   56104 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210817002204-111344
	I0817 00:44:48.313571   56104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55228 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\calico-20210817002204-111344\id_rsa Username:docker}
	I0817 00:44:52.816219   56104 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 00:44:53.375525   56104 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0817 00:44:53.719369   56104 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (7.1327854s)
	I0817 00:44:53.719665   56104 ssh_runner.go:189] Completed: sudo systemctl is-active --quiet service kubelet: (7.1139286s)
	I0817 00:44:53.719665   56104 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0817 00:44:53.728105   56104 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" calico-20210817002204-111344
	I0817 00:44:54.310478   56104 node_ready.go:35] waiting up to 5m0s for node "calico-20210817002204-111344" to be "Ready" ...
	I0817 00:44:54.339506   56104 node_ready.go:49] node "calico-20210817002204-111344" has status "Ready":"True"
	I0817 00:44:54.339619   56104 node_ready.go:38] duration metric: took 29.0276ms waiting for node "calico-20210817002204-111344" to be "Ready" ...
	I0817 00:44:54.339619   56104 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 00:44:54.400702   56104 pod_ready.go:78] waiting up to 5m0s for pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace to be "Ready" ...
	I0817 00:44:56.502621   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:44:58.584861   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:00.629719   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:03.045214   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:05.053710   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:07.528277   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:10.023385   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:12.497540   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:14.529769   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:16.569353   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:16.902977   56104 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (24.0857049s)
	I0817 00:45:16.902977   56104 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (23.5265577s)
	I0817 00:45:16.904171   56104 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0817 00:45:16.904171   56104 addons.go:344] enableAddons completed in 30.3189791s
	I0817 00:45:17.505862   56104 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (23.7852929s)
	I0817 00:45:17.505862   56104 start.go:728] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0817 00:45:19.037374   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:21.498475   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:24.005799   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:26.497938   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:28.525322   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:31.001669   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:33.011273   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:35.273595   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:37.482575   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:39.492424   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:41.516323   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:43.984692   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:45.993241   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:48.020256   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:50.536856   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:53.012282   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:55.492296   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:57.540287   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:00.038186   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:02.518845   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:04.519344   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:07.040502   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:09.527921   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:11.998831   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:14.524593   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:17.008876   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:19.479603   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:21.526574   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:24.004490   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:26.648679   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:29.012238   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:31.490316   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:33.504774   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:35.523525   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:37.990935   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:40.486594   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:42.996270   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:45.010647   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:47.286051   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:49.518431   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:51.527655   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:53.532789   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:55.996605   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:58.020700   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:00.026526   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:02.517865   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:04.986466   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:07.059721   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:09.533454   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:12.007117   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:14.014736   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:16.513999   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:18.573976   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:21.013036   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:23.036479   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:25.507501   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:28.005629   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:30.495498   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:33.013319   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:35.526865   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:38.141181   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:41.700456   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:44.023765   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:46.026459   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:48.722168   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:51.048717   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:53.534135   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:56.012932   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:58.027684   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:48:00.496186   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:48:02.500146   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:48:04.507388   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:48:06.537823   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:48:09.015970   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:48:11.019776   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:48:13.517355   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:48:15.538437   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:48:18.014400   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:48:20.090272   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:48:22.494558   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:48:25.002179   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:48:27.505803   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:48:29.515353   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:48:31.521559   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:48:34.010357   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:48:36.504313   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:48:38.998421   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:48:41.015448   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:48:43.030991   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:48:45.504138   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:48:47.522699   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:48:49.572662   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:48:51.999311   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:48:54.015814   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:48:54.637182   56104 pod_ready.go:81] duration metric: took 4m0.2273508s waiting for pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace to be "Ready" ...
	E0817 00:48:54.637182   56104 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0817 00:48:54.637182   56104 pod_ready.go:78] waiting up to 5m0s for pod "calico-node-csnhz" in "kube-system" namespace to be "Ready" ...
	I0817 00:48:56.739735   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:48:58.781370   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:49:01.183059   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:49:03.274714   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:49:05.292175   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:49:07.296392   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:49:09.754592   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:49:12.294724   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:49:14.862978   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:49:17.257781   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:49:19.260662   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:49:21.328303   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:49:23.775150   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:49:26.308030   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:49:28.761776   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:49:31.250585   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:49:33.252571   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:49:35.274261   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:49:37.306531   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:49:39.790702   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:49:41.836395   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:49:44.243855   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:49:46.264652   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:49:48.820152   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:49:51.144928   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:49:53.257338   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:49:55.268666   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:49:57.279287   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:49:59.740024   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:50:01.948409   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:50:04.258593   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:50:06.736387   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:50:09.263663   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:50:11.742333   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:50:14.251907   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:50:16.256890   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:50:18.276250   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:50:20.759389   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:50:22.806260   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:50:25.322277   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:50:27.753585   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:50:30.231618   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:50:32.246260   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:50:34.278075   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:50:36.748864   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:50:38.751245   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:50:40.757519   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:50:42.787913   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:50:45.270139   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:50:47.749432   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:50:49.756809   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:50:52.247660   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:50:54.593082   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:50:56.734722   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:50:58.748018   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:51:00.793026   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:51:03.294325   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:51:05.745088   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:51:07.748449   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:51:09.760393   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:51:12.243312   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:51:14.262882   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:51:16.745071   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:51:18.759797   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:51:21.278405   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:51:23.291839   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:51:25.348988   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:51:27.738960   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:51:29.793126   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:51:31.804706   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:51:34.260194   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:51:36.775538   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:51:39.271935   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:51:41.778546   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:51:43.813986   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:51:46.288852   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:51:48.746607   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:51:51.251607   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:51:53.736563   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:51:55.890663   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:51:58.242791   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:52:00.247170   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:52:02.750102   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:52:04.756723   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:52:07.238571   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:52:09.254612   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:52:11.766681   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:52:14.237995   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:52:16.238306   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:52:18.241316   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:52:20.254598   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:52:22.255095   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:52:24.258314   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:52:26.460233   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:52:28.765028   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:52:31.234375   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:52:33.239126   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:52:35.241401   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:52:37.254850   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:52:39.314240   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:52:41.760061   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:52:44.245527   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:52:46.251599   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:52:48.252366   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:52:50.280539   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:52:52.745224   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:52:54.746970   56104 pod_ready.go:102] pod "calico-node-csnhz" in "kube-system" namespace has status "Ready":"False"
	I0817 00:52:54.838561   56104 pod_ready.go:81] duration metric: took 4m0.1922012s waiting for pod "calico-node-csnhz" in "kube-system" namespace to be "Ready" ...
	E0817 00:52:54.838561   56104 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0817 00:52:54.838703   56104 pod_ready.go:38] duration metric: took 8m0.4807742s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 00:52:54.848467   56104 out.go:177] 
	W0817 00:52:54.849149   56104 out.go:242] X Exiting due to GUEST_START: wait 5m0s for node: extra waiting: timed out waiting 5m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	W0817 00:52:54.849286   56104 out.go:242] * 
	[warning]: invalid value provided to Color, using default
	W0817 00:52:54.855727   56104 out.go:242] ╭──────────────────────────────────────────────────────────────────────────────╮
	│                                                                              │
	│    * If the above advice does not help, please let us know:                  │
	│      https://github.com/kubernetes/minikube/issues/new/choose                │
	│                                                                              │
	│    * Please attach the following file to the GitHub issue:                   │
	│    * - C:\Users\jenkins\minikube-integration\.minikube\logs\lastStart.txt    │
	│                                                                              │
	╰──────────────────────────────────────────────────────────────────────────────╯
	I0817 00:52:54.859141   56104 out.go:177] 

                                                
                                                
** /stderr **
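
(Editor's note: the stderr log above shows the `kubectl ... replace -f -` pipeline completing at 00:45:17, which injects a hosts block for host.minikube.internal ahead of the forward directive in the CoreDNS Corefile. A minimal spot-check sketch, assuming the cluster were still running; the context name is taken from this log:

	# Hypothetical spot-check of the injected CoreDNS host record:
	kubectl --context calico-20210817002204-111344 -n kube-system \
	  get configmap coredns -o jsonpath='{.data.Corefile}' \
	  | grep -B1 -A2 'host.minikube.internal'
	# Expected fragment, per the sed expression in the log:
	#     hosts {
	#        192.168.65.2 host.minikube.internal
	#        fallthrough
	#     }
)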
net_test.go:100: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (734.08s)
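
(Editor's note: the failure above is a readiness timeout, not a crash: pod_ready waited the full 4m0s first for calico-kube-controllers-58497c65d5-gtb6k and then for calico-node-csnhz, neither of which ever reported Ready, so start exited with GUEST_START / exit status 80. A hedged triage sketch; the pod names come from this log, while the container name calico-node is an assumption based on the stock Calico manifest:

	# Hypothetical triage commands for the stuck Calico pods:
	kubectl --context calico-20210817002204-111344 -n kube-system get pods -o wide
	kubectl --context calico-20210817002204-111344 -n kube-system describe pod calico-node-csnhz
	kubectl --context calico-20210817002204-111344 -n kube-system logs calico-node-csnhz -c calico-node --tail=50

describe typically surfaces the failing readiness probe, and the container log shows why the pod cannot converge.)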

                                                
                                    
TestNetworkPlugins/group/custom-weave/Start (688.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-weave/Start
net_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-weave-20210817002204-111344 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata\weavenet.yaml --driver=docker
E0817 00:41:09.708756  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\default-k8s-different-port-20210817002733-111344\client.crt: The system cannot find the path specified.
E0817 00:41:23.499474  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\old-k8s-version-20210817002204-111344\client.crt: The system cannot find the path specified.
E0817 00:41:50.671690  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\default-k8s-different-port-20210817002733-111344\client.crt: The system cannot find the path specified.
E0817 00:41:51.199045  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\old-k8s-version-20210817002204-111344\client.crt: The system cannot find the path specified.
E0817 00:42:01.112687  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344\client.crt: The system cannot find the path specified.
E0817 00:42:22.789682  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\no-preload-20210817002237-111344\client.crt: The system cannot find the path specified.
E0817 00:42:50.506929  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\no-preload-20210817002237-111344\client.crt: The system cannot find the path specified.
E0817 00:43:12.597549  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\default-k8s-different-port-20210817002733-111344\client.crt: The system cannot find the path specified.
E0817 00:44:09.184611  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210816232348-111344\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestNetworkPlugins/group/custom-weave/Start
net_test.go:98: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p custom-weave-20210817002204-111344 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata\weavenet.yaml --driver=docker: exit status 105 (11m28.2116822s)

                                                
                                                
-- stdout --
	* [custom-weave-20210817002204-111344] minikube v1.22.0 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	  - KUBECONFIG=C:\Users\jenkins\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12230
	* Using the docker driver based on user configuration
	* Starting control plane node custom-weave-20210817002204-111344 in cluster custom-weave-20210817002204-111344
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.21.3 on Docker 20.10.8 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring testdata\weavenet.yaml (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0817 00:41:09.263387   59296 out.go:298] Setting OutFile to fd 4004 ...
	I0817 00:41:09.264830   59296 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 00:41:09.264830   59296 out.go:311] Setting ErrFile to fd 1624...
	I0817 00:41:09.264979   59296 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 00:41:09.287772   59296 out.go:305] Setting JSON to false
	I0817 00:41:09.294086   59296 start.go:111] hostinfo: {"hostname":"windows-server-2","uptime":8368916,"bootTime":1620791953,"procs":146,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2f8328f4-5428-47c7-ab5a-b32e2504bd6f"}
	W0817 00:41:09.294313   59296 start.go:119] gopshost.Virtualization returned error: not implemented yet
	I0817 00:41:09.297951   59296 out.go:177] * [custom-weave-20210817002204-111344] minikube v1.22.0 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	I0817 00:41:09.297951   59296 notify.go:169] Checking for updates...
	I0817 00:41:09.300363   59296 out.go:177]   - KUBECONFIG=C:\Users\jenkins\minikube-integration\kubeconfig
	I0817 00:41:09.302065   59296 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins\minikube-integration\.minikube
	I0817 00:41:09.303724   59296 out.go:177]   - MINIKUBE_LOCATION=12230
	I0817 00:41:09.305747   59296 config.go:177] Loaded profile config "calico-20210817002204-111344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.21.3
	I0817 00:41:09.306279   59296 config.go:177] Loaded profile config "cilium-20210817002204-111344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.21.3
	I0817 00:41:09.307065   59296 config.go:177] Loaded profile config "newest-cni-20210817003608-111344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.0-rc.0
	I0817 00:41:09.307195   59296 driver.go:335] Setting default libvirt URI to qemu:///system
	I0817 00:41:11.106075   59296 docker.go:132] docker version: linux-20.10.2
	I0817 00:41:11.116614   59296 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 00:41:11.946728   59296 info.go:263] docker info: {ID:4XCY:3GZD:KK67:IPM7:RRQF:WWZF:OGQ6:X6HQ:572M:7N57:P63G:EAE5 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:68 OomKillDisable:true NGoroutines:61 SystemTime:2021-08-17 00:41:11.5521924 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0817 00:41:11.949452   59296 out.go:177] * Using the docker driver based on user configuration
	I0817 00:41:11.949654   59296 start.go:278] selected driver: docker
	I0817 00:41:11.952777   59296 start.go:751] validating driver "docker" against <nil>
	I0817 00:41:11.952777   59296 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0817 00:41:12.038802   59296 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 00:41:12.889639   59296 info.go:263] docker info: {ID:4XCY:3GZD:KK67:IPM7:RRQF:WWZF:OGQ6:X6HQ:572M:7N57:P63G:EAE5 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:68 OomKillDisable:true NGoroutines:61 SystemTime:2021-08-17 00:41:12.5208658 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0817 00:41:12.889899   59296 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0817 00:41:12.890441   59296 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0817 00:41:12.890441   59296 cni.go:93] Creating CNI manager for "testdata\\weavenet.yaml"
	I0817 00:41:12.890832   59296 start_flags.go:272] Found "testdata\\weavenet.yaml" CNI - setting NetworkPlugin=cni
	I0817 00:41:12.890832   59296 start_flags.go:277] config:
	{Name:custom-weave-20210817002204-111344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:custom-weave-20210817002204-111344 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 00:41:12.893457   59296 out.go:177] * Starting control plane node custom-weave-20210817002204-111344 in cluster custom-weave-20210817002204-111344
	I0817 00:41:12.893655   59296 cache.go:117] Beginning downloading kic base image for docker with docker
	I0817 00:41:12.895520   59296 out.go:177] * Pulling base image ...
	I0817 00:41:12.895952   59296 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
	I0817 00:41:12.896104   59296 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0817 00:41:12.896217   59296 preload.go:147] Found local preload: C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4
	I0817 00:41:12.896217   59296 cache.go:56] Caching tarball of preloaded images
	I0817 00:41:12.896668   59296 preload.go:173] Found C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0817 00:41:12.897131   59296 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on docker
	I0817 00:41:12.897509   59296 profile.go:148] Saving config to C:\Users\jenkins\minikube-integration\.minikube\profiles\custom-weave-20210817002204-111344\config.json ...
	I0817 00:41:12.897750   59296 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\.minikube\profiles\custom-weave-20210817002204-111344\config.json: {Name:mk769ffcae8ca768058b084632ee9d024695e9ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 00:41:13.387595   59296 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0817 00:41:13.387595   59296 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0817 00:41:13.387595   59296 cache.go:205] Successfully downloaded all kic artifacts
	I0817 00:41:13.387954   59296 start.go:313] acquiring machines lock for custom-weave-20210817002204-111344: {Name:mk87ba722fb32afac92ba3ff03d3585b830c524c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 00:41:13.388190   59296 start.go:317] acquired machines lock for "custom-weave-20210817002204-111344" in 235.2µs
	I0817 00:41:13.388508   59296 start.go:89] Provisioning new machine with config: &{Name:custom-weave-20210817002204-111344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:custom-weave-20210817002204-111344 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0817 00:41:13.388725   59296 start.go:126] createHost starting for "" (driver="docker")
	I0817 00:41:13.391093   59296 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0817 00:41:13.391619   59296 start.go:160] libmachine.API.Create for "custom-weave-20210817002204-111344" (driver="docker")
	I0817 00:41:13.391887   59296 client.go:168] LocalClient.Create starting
	I0817 00:41:13.392517   59296 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem
	I0817 00:41:13.392517   59296 main.go:130] libmachine: Decoding PEM data...
	I0817 00:41:13.392785   59296 main.go:130] libmachine: Parsing certificate...
	I0817 00:41:13.392959   59296 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem
	I0817 00:41:13.393141   59296 main.go:130] libmachine: Decoding PEM data...
	I0817 00:41:13.393141   59296 main.go:130] libmachine: Parsing certificate...
	I0817 00:41:13.402874   59296 cli_runner.go:115] Run: docker network inspect custom-weave-20210817002204-111344 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0817 00:41:13.923199   59296 cli_runner.go:162] docker network inspect custom-weave-20210817002204-111344 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0817 00:41:13.932459   59296 network_create.go:255] running [docker network inspect custom-weave-20210817002204-111344] to gather additional debugging logs...
	I0817 00:41:13.932672   59296 cli_runner.go:115] Run: docker network inspect custom-weave-20210817002204-111344
	W0817 00:41:14.431162   59296 cli_runner.go:162] docker network inspect custom-weave-20210817002204-111344 returned with exit code 1
	I0817 00:41:14.431162   59296 network_create.go:258] error running [docker network inspect custom-weave-20210817002204-111344]: docker network inspect custom-weave-20210817002204-111344: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: custom-weave-20210817002204-111344
	I0817 00:41:14.431162   59296 network_create.go:260] output of [docker network inspect custom-weave-20210817002204-111344]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: custom-weave-20210817002204-111344
	
	** /stderr **
	I0817 00:41:14.438480   59296 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0817 00:41:14.955626   59296 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0000065e0] misses:0}
	I0817 00:41:14.955626   59296 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0817 00:41:14.955626   59296 network_create.go:106] attempt to create docker network custom-weave-20210817002204-111344 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0817 00:41:14.961292   59296 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20210817002204-111344
	W0817 00:41:15.491565   59296 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20210817002204-111344 returned with exit code 1
	W0817 00:41:15.491565   59296 network_create.go:98] failed to create docker network custom-weave-20210817002204-111344 192.168.49.0/24, will retry: subnet is taken
	I0817 00:41:15.511823   59296 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0000065e0] amended:false}} dirty:map[] misses:0}
	I0817 00:41:15.512011   59296 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0817 00:41:15.527849   59296 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0000065e0] amended:true}} dirty:map[192.168.49.0:0xc0000065e0 192.168.58.0:0xc00058e360] misses:0}
	I0817 00:41:15.527849   59296 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0817 00:41:15.527849   59296 network_create.go:106] attempt to create docker network custom-weave-20210817002204-111344 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0817 00:41:15.534064   59296 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20210817002204-111344
	I0817 00:41:16.972212   59296 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20210817002204-111344: (1.4379989s)
	I0817 00:41:16.972275   59296 network_create.go:90] docker network custom-weave-20210817002204-111344 192.168.58.0/24 created
	I0817 00:41:16.972275   59296 kic.go:106] calculated static IP "192.168.58.2" for the "custom-weave-20210817002204-111344" container
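The sequence above is minikube's subnet reserve-then-retry loop: 192.168.49.0/24 is reserved for one minute, `docker network create` fails with "subnet is taken" (the range is likely already held by another test cluster's bridge), the unexpired reservation is skipped, and the next candidate 192.168.58.0/24 succeeds; the node then receives the first client address after the gateway (.1 gateway, .2 node). A minimal Go sketch of that loop, assuming hypothetical names and the /24 stepping seen in the log (this is not minikube's actual API):

    package main

    import (
        "fmt"
        "os/exec"
        "sync"
        "time"
    )

    var (
        mu           sync.Mutex
        reservations = map[string]time.Time{} // subnet -> reservation expiry
    )

    // reserve claims a subnet for ttl so parallel cluster creations don't
    // race for the same range (the "reserving subnet ... for 1m0s" lines).
    func reserve(subnet string, ttl time.Duration) bool {
        mu.Lock()
        defer mu.Unlock()
        if exp, ok := reservations[subnet]; ok && time.Now().Before(exp) {
            return false // unexpired reservation: skip this subnet
        }
        reservations[subnet] = time.Now().Add(ttl)
        return true
    }

    // createNetwork walks candidate /24s (192.168.49.0, .58.0, ... as in the
    // log) until `docker network create` stops failing with "subnet is taken".
    func createNetwork(name string) error {
        for third := 49; third <= 247; third += 9 {
            subnet := fmt.Sprintf("192.168.%d.0/24", third)
            if !reserve(subnet, time.Minute) {
                continue
            }
            gateway := fmt.Sprintf("192.168.%d.1", third)
            out, err := exec.Command("docker", "network", "create",
                "--driver=bridge", "--subnet="+subnet, "--gateway="+gateway,
                name).CombinedOutput()
            if err == nil {
                return nil
            }
            fmt.Printf("create on %s failed, will retry: %s", subnet, out)
        }
        return fmt.Errorf("no free subnet found for %s", name)
    }

    func main() {
        if err := createNetwork("example-net"); err != nil {
            fmt.Println(err)
        }
    }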
	I0817 00:41:16.984917   59296 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0817 00:41:17.453748   59296 cli_runner.go:115] Run: docker volume create custom-weave-20210817002204-111344 --label name.minikube.sigs.k8s.io=custom-weave-20210817002204-111344 --label created_by.minikube.sigs.k8s.io=true
	I0817 00:41:18.100591   59296 oci.go:102] Successfully created a docker volume custom-weave-20210817002204-111344
	I0817 00:41:18.109570   59296 cli_runner.go:115] Run: docker run --rm --name custom-weave-20210817002204-111344-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20210817002204-111344 --entrypoint /usr/bin/test -v custom-weave-20210817002204-111344:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib
	I0817 00:41:21.175015   59296 cli_runner.go:168] Completed: docker run --rm --name custom-weave-20210817002204-111344-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20210817002204-111344 --entrypoint /usr/bin/test -v custom-weave-20210817002204-111344:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib: (3.0652048s)
	I0817 00:41:21.175015   59296 oci.go:106] Successfully prepared a docker volume custom-weave-20210817002204-111344
	I0817 00:41:21.175236   59296 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
	I0817 00:41:21.175236   59296 kic.go:179] Starting extracting preloaded images to volume ...
	I0817 00:41:21.182297   59296 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-weave-20210817002204-111344:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir
	I0817 00:41:21.183033   59296 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	W0817 00:41:21.765650   59296 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-weave-20210817002204-111344:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I0817 00:41:21.765650   59296 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-weave-20210817002204-111344:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: System.Exception: The notification platform is unavailable.
	
	   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)
	   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__6.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.WPF\PromptShareDirectory.cs:line 53
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__8.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.ApiServices\Mounting\FileSharing.cs:line 95
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__6.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.ApiServices\Mounting\FileSharing.cs:line 55
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.HttpApi\Controllers\FilesharingController.cs:line 21
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()
	(remainder of the serialized .NET exception, binary payload omitted: ExceptionMethod CreateToastNotifier in Windows.UI.Notifications.ToastNotificationManager, assembly Windows.UI, Version=255.255.255.255, Culture=neutral, PublicKeyToken=null, ContentType=WindowsRuntime; RestrictedDescription: "The notification platform is unavailable.")
	See 'docker run --help'.
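The exit-125 above is Docker Desktop's file-sharing path failing: mounting the host tarball path triggers a share prompt (Docker.WPF.PromptShareDirectory in the stack trace), which tries to raise a Windows toast notification; in this headless CI session the notification platform is unavailable, so the API returns 500 and the direct volume-mount extraction cannot run. minikube treats this as non-fatal and falls back to copying the preload tarball over SSH and untarring it inside the node (see 00:41:48 below). A hedged Go sketch of that try-then-fall-back shape, with placeholder image name, port, and paths rather than minikube's real ones:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // runCmd wraps exec so the fallback logic below reads clearly.
    func runCmd(name string, args ...string) error {
        out, err := exec.Command(name, args...).CombinedOutput()
        if err != nil {
            return fmt.Errorf("%s %v: %v\n%s", name, args, err, out)
        }
        return nil
    }

    // loadPreload mirrors the shape seen in the log: try the docker-run
    // volume extraction first; if the host path cannot be shared (the
    // exit-125 case above), scp the tarball into the guest and untar there.
    func loadPreload(hostTarball, volume, sshPort string) error {
        if err := runCmd("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", hostTarball+":/preloaded.tar:ro",
            "-v", volume+":/extractDir",
            "kicbase-image", "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir"); err == nil {
            return nil
        }
        // Fallback: push the tarball through SSH, then extract in-guest.
        if err := runCmd("scp", "-P", sshPort, hostTarball,
            "docker@127.0.0.1:/preloaded.tar.lz4"); err != nil {
            return err
        }
        return runCmd("ssh", "-p", sshPort, "docker@127.0.0.1",
            "sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4")
    }

    func main() {
        if err := loadPreload("preloaded-images.tar.lz4", "minikube-vol", "55233"); err != nil {
            fmt.Println("preload failed:", err)
        }
    }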
	I0817 00:41:22.018525   59296 info.go:263] docker info: {ID:4XCY:3GZD:KK67:IPM7:RRQF:WWZF:OGQ6:X6HQ:572M:7N57:P63G:EAE5 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:62 SystemTime:2021-08-17 00:41:21.6506907 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0817 00:41:22.025828   59296 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0817 00:41:22.834012   59296 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-weave-20210817002204-111344 --name custom-weave-20210817002204-111344 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20210817002204-111344 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-weave-20210817002204-111344 --network custom-weave-20210817002204-111344 --ip 192.168.58.2 --volume custom-weave-20210817002204-111344:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6
	I0817 00:41:25.337084   59296 cli_runner.go:168] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-weave-20210817002204-111344 --name custom-weave-20210817002204-111344 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20210817002204-111344 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-weave-20210817002204-111344 --network custom-weave-20210817002204-111344 --ip 192.168.58.2 --volume custom-weave-20210817002204-111344:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6: (2.5029762s)
	I0817 00:41:25.344158   59296 cli_runner.go:115] Run: docker container inspect custom-weave-20210817002204-111344 --format={{.State.Running}}
	I0817 00:41:25.910821   59296 cli_runner.go:115] Run: docker container inspect custom-weave-20210817002204-111344 --format={{.State.Status}}
	I0817 00:41:26.441037   59296 cli_runner.go:115] Run: docker exec custom-weave-20210817002204-111344 stat /var/lib/dpkg/alternatives/iptables
	I0817 00:41:27.392434   59296 oci.go:278] the created container "custom-weave-20210817002204-111344" has a running status.
	I0817 00:41:27.392434   59296 kic.go:210] Creating ssh key for kic: C:\Users\jenkins\minikube-integration\.minikube\machines\custom-weave-20210817002204-111344\id_rsa...
	I0817 00:41:27.891324   59296 kic_runner.go:188] docker (temp): C:\Users\jenkins\minikube-integration\.minikube\machines\custom-weave-20210817002204-111344\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0817 00:41:28.769409   59296 cli_runner.go:115] Run: docker container inspect custom-weave-20210817002204-111344 --format={{.State.Status}}
	I0817 00:41:29.281244   59296 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0817 00:41:29.281244   59296 kic_runner.go:115] Args: [docker exec --privileged custom-weave-20210817002204-111344 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0817 00:41:29.931133   59296 kic.go:250] ensuring only current user has permissions to key file located at : C:\Users\jenkins\minikube-integration\.minikube\machines\custom-weave-20210817002204-111344\id_rsa...
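The 00:41:27–00:41:29 lines bootstrap SSH access to the KIC container: a fresh RSA keypair is created on the host, the public half is copied into /home/docker/.ssh/authorized_keys inside the container (then chowned via docker exec), and the host-side private key's permissions are tightened. A hypothetical Go sketch of that key bootstrap, assuming golang.org/x/crypto/ssh and illustrative paths:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // makeKeypair generates an RSA key, PEM-encodes the private half for the
    // host, and renders the public half in authorized_keys format for the node.
    func makeKeypair(privPath string) ([]byte, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, err
        }
        privPEM := pem.EncodeToMemory(&pem.Block{
            Type:  "RSA PRIVATE KEY",
            Bytes: x509.MarshalPKCS1PrivateKey(key),
        })
        // 0600 is the host-side analogue of "ensuring only current user has
        // permissions to key file" in the log above.
        if err := os.WriteFile(privPath, privPEM, 0o600); err != nil {
            return nil, err
        }
        pub, err := ssh.NewPublicKey(&key.PublicKey)
        if err != nil {
            return nil, err
        }
        return ssh.MarshalAuthorizedKey(pub), nil
    }

    func main() {
        line, err := makeKeypair("id_rsa")
        if err != nil {
            panic(err)
        }
        fmt.Printf("authorized_keys entry: %s", line)
    }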
	I0817 00:41:30.576416   59296 cli_runner.go:115] Run: docker container inspect custom-weave-20210817002204-111344 --format={{.State.Status}}
	I0817 00:41:31.072183   59296 machine.go:88] provisioning docker machine ...
	I0817 00:41:31.072183   59296 ubuntu.go:169] provisioning hostname "custom-weave-20210817002204-111344"
	I0817 00:41:31.078320   59296 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210817002204-111344
	I0817 00:41:31.595601   59296 main.go:130] libmachine: Using SSH client type: native
	I0817 00:41:31.611432   59296 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x4c95a0] 0x4c9560 <nil>  [] 0s} 127.0.0.1 55233 <nil> <nil>}
	I0817 00:41:31.611432   59296 main.go:130] libmachine: About to run SSH command:
	sudo hostname custom-weave-20210817002204-111344 && echo "custom-weave-20210817002204-111344" | sudo tee /etc/hostname
	I0817 00:41:31.985585   59296 main.go:130] libmachine: SSH cmd err, output: <nil>: custom-weave-20210817002204-111344
	
	I0817 00:41:31.990004   59296 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210817002204-111344
	I0817 00:41:32.523326   59296 main.go:130] libmachine: Using SSH client type: native
	I0817 00:41:32.523957   59296 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x4c95a0] 0x4c9560 <nil>  [] 0s} 127.0.0.1 55233 <nil> <nil>}
	I0817 00:41:32.523957   59296 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-weave-20210817002204-111344' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-weave-20210817002204-111344/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-weave-20210817002204-111344' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 00:41:32.830868   59296 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0817 00:41:32.830954   59296 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins\minikube-integration\.minikube CaCertPath:C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins\minikube-integration\.minikube}
	I0817 00:41:32.830954   59296 ubuntu.go:177] setting up certificates
	I0817 00:41:32.830954   59296 provision.go:83] configureAuth start
	I0817 00:41:32.837375   59296 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-weave-20210817002204-111344
	I0817 00:41:33.343784   59296 provision.go:138] copyHostCerts
	I0817 00:41:33.344325   59296 exec_runner.go:145] found C:\Users\jenkins\minikube-integration\.minikube/ca.pem, removing ...
	I0817 00:41:33.344325   59296 exec_runner.go:190] rm: C:\Users\jenkins\minikube-integration\.minikube\ca.pem
	I0817 00:41:33.344926   59296 exec_runner.go:152] cp: C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0817 00:41:33.346329   59296 exec_runner.go:145] found C:\Users\jenkins\minikube-integration\.minikube/cert.pem, removing ...
	I0817 00:41:33.346845   59296 exec_runner.go:190] rm: C:\Users\jenkins\minikube-integration\.minikube\cert.pem
	I0817 00:41:33.347246   59296 exec_runner.go:152] cp: C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0817 00:41:33.347789   59296 exec_runner.go:145] found C:\Users\jenkins\minikube-integration\.minikube/key.pem, removing ...
	I0817 00:41:33.347789   59296 exec_runner.go:190] rm: C:\Users\jenkins\minikube-integration\.minikube\key.pem
	I0817 00:41:33.348820   59296 exec_runner.go:152] cp: C:\Users\jenkins\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins\minikube-integration\.minikube/key.pem (1679 bytes)
	I0817 00:41:33.350074   59296 provision.go:112] generating server cert: C:\Users\jenkins\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.custom-weave-20210817002204-111344 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube custom-weave-20210817002204-111344]
	I0817 00:41:33.590935   59296 provision.go:172] copyRemoteCerts
	I0817 00:41:33.598179   59296 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 00:41:33.603453   59296 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210817002204-111344
	I0817 00:41:34.081695   59296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55233 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\custom-weave-20210817002204-111344\id_rsa Username:docker}
	I0817 00:41:34.290166   59296 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1273 bytes)
	I0817 00:41:34.400930   59296 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0817 00:41:34.507013   59296 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0817 00:41:34.598865   59296 provision.go:86] duration metric: configureAuth took 1.7678444s
	I0817 00:41:34.599059   59296 ubuntu.go:193] setting minikube options for container-runtime
	I0817 00:41:34.599551   59296 config.go:177] Loaded profile config "custom-weave-20210817002204-111344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.21.3
	I0817 00:41:34.609777   59296 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210817002204-111344
	I0817 00:41:35.086036   59296 main.go:130] libmachine: Using SSH client type: native
	I0817 00:41:35.086386   59296 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x4c95a0] 0x4c9560 <nil>  [] 0s} 127.0.0.1 55233 <nil> <nil>}
	I0817 00:41:35.086491   59296 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0817 00:41:35.400701   59296 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0817 00:41:35.400701   59296 ubuntu.go:71] root file system type: overlay
	I0817 00:41:35.400701   59296 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0817 00:41:35.409977   59296 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210817002204-111344
	I0817 00:41:35.928172   59296 main.go:130] libmachine: Using SSH client type: native
	I0817 00:41:35.928172   59296 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x4c95a0] 0x4c9560 <nil>  [] 0s} 127.0.0.1 55233 <nil> <nil>}
	I0817 00:41:35.929200   59296 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0817 00:41:36.310116   59296 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0817 00:41:36.317018   59296 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210817002204-111344
	I0817 00:41:36.836140   59296 main.go:130] libmachine: Using SSH client type: native
	I0817 00:41:36.836573   59296 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x4c95a0] 0x4c9560 <nil>  [] 0s} 127.0.0.1 55233 <nil> <nil>}
	I0817 00:41:36.836863   59296 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0817 00:41:40.373880   59296 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-07-30 19:52:33.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2021-08-17 00:41:36.300524000 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	+BindsTo=containerd.service
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0817 00:41:40.373880   59296 machine.go:91] provisioned docker machine in 9.301343s
	I0817 00:41:40.373880   59296 client.go:171] LocalClient.Create took 26.9809676s
	I0817 00:41:40.374079   59296 start.go:168] duration metric: libmachine.API.Create for "custom-weave-20210817002204-111344" took 26.9814344s
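The `sudo diff -u ... || { mv ...; systemctl restart docker; }` command at 00:41:36 is an idempotence guard: the freshly rendered unit only replaces the installed one, and dockerd is only restarted, when the two actually differ, which they do here (hence the diff output and the daemon restart, which is why this step takes several seconds of the 9.3s provisioning time). A hypothetical Go rendering of the same guard:

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    // updateUnit only rewrites the unit file and bounces docker when the
    // rendered content changed, mirroring the shell one-liner above.
    func updateUnit(rendered []byte) error {
        const path = "/lib/systemd/system/docker.service"
        current, _ := os.ReadFile(path) // a missing file simply counts as "differs"
        if bytes.Equal(current, rendered) {
            return nil // unchanged: skip the expensive daemon restart
        }
        if err := os.WriteFile(path, rendered, 0o644); err != nil {
            return err
        }
        if err := exec.Command("systemctl", "daemon-reload").Run(); err != nil {
            return err
        }
        return exec.Command("systemctl", "restart", "docker").Run()
    }

    func main() {
        fmt.Println(updateUnit([]byte("[Unit]\nDescription=example\n")))
    }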
	I0817 00:41:40.374079   59296 start.go:267] post-start starting for "custom-weave-20210817002204-111344" (driver="docker")
	I0817 00:41:40.374079   59296 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 00:41:40.381368   59296 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 00:41:40.391254   59296 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210817002204-111344
	I0817 00:41:40.897315   59296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55233 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\custom-weave-20210817002204-111344\id_rsa Username:docker}
	I0817 00:41:41.099464   59296 ssh_runner.go:149] Run: cat /etc/os-release
	I0817 00:41:41.126790   59296 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0817 00:41:41.126920   59296 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0817 00:41:41.126920   59296 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0817 00:41:41.127106   59296 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0817 00:41:41.127106   59296 filesync.go:126] Scanning C:\Users\jenkins\minikube-integration\.minikube\addons for local assets ...
	I0817 00:41:41.127719   59296 filesync.go:126] Scanning C:\Users\jenkins\minikube-integration\.minikube\files for local assets ...
	I0817 00:41:41.128795   59296 filesync.go:149] local asset: C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\1113442.pem -> 1113442.pem in /etc/ssl/certs
	I0817 00:41:41.140388   59296 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0817 00:41:41.183506   59296 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\1113442.pem --> /etc/ssl/certs/1113442.pem (1708 bytes)
	I0817 00:41:41.269228   59296 start.go:270] post-start completed in 895.1145ms
	I0817 00:41:41.279012   59296 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-weave-20210817002204-111344
	I0817 00:41:41.757606   59296 profile.go:148] Saving config to C:\Users\jenkins\minikube-integration\.minikube\profiles\custom-weave-20210817002204-111344\config.json ...
	I0817 00:41:41.769226   59296 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0817 00:41:41.776460   59296 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210817002204-111344
	I0817 00:41:42.258626   59296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55233 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\custom-weave-20210817002204-111344\id_rsa Username:docker}
	I0817 00:41:42.465653   59296 start.go:129] duration metric: createHost completed in 29.0758229s
	I0817 00:41:42.465653   59296 start.go:80] releasing machines lock for "custom-weave-20210817002204-111344", held for 29.0763579s
	I0817 00:41:42.473679   59296 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-weave-20210817002204-111344
	I0817 00:41:43.014168   59296 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0817 00:41:43.022188   59296 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210817002204-111344
	I0817 00:41:43.029411   59296 ssh_runner.go:149] Run: systemctl --version
	I0817 00:41:43.033269   59296 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210817002204-111344
	I0817 00:41:43.538783   59296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55233 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\custom-weave-20210817002204-111344\id_rsa Username:docker}
	I0817 00:41:43.560565   59296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55233 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\custom-weave-20210817002204-111344\id_rsa Username:docker}
	I0817 00:41:43.794923   59296 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0817 00:41:43.956227   59296 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0817 00:41:44.033025   59296 cruntime.go:249] skipping containerd shutdown because we are bound to it
	I0817 00:41:44.042636   59296 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0817 00:41:44.117619   59296 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 00:41:44.229266   59296 ssh_runner.go:149] Run: sudo systemctl unmask docker.service
	I0817 00:41:44.626377   59296 ssh_runner.go:149] Run: sudo systemctl enable docker.socket
	I0817 00:41:45.149017   59296 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0817 00:41:45.262871   59296 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0817 00:41:45.676115   59296 ssh_runner.go:149] Run: sudo systemctl start docker
	I0817 00:41:45.755492   59296 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0817 00:41:45.989343   59296 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0817 00:41:46.257268   59296 out.go:204] * Preparing Kubernetes v1.21.3 on Docker 20.10.8 ...
	I0817 00:41:46.263988   59296 cli_runner.go:115] Run: docker exec -t custom-weave-20210817002204-111344 dig +short host.docker.internal
	I0817 00:41:47.075367   59296 network.go:69] got host ip for mount in container by digging dns: 192.168.65.2
	I0817 00:41:47.083214   59296 ssh_runner.go:149] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0817 00:41:47.109841   59296 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 00:41:47.181973   59296 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" custom-weave-20210817002204-111344
	I0817 00:41:47.716974   59296 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
	I0817 00:41:47.732686   59296 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0817 00:41:47.909356   59296 docker.go:535] Got preloaded images: 
	I0817 00:41:47.909356   59296 docker.go:541] k8s.gcr.io/kube-apiserver:v1.21.3 wasn't preloaded
	I0817 00:41:47.916973   59296 ssh_runner.go:149] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0817 00:41:47.978171   59296 ssh_runner.go:149] Run: which lz4
	I0817 00:41:48.024760   59296 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0817 00:41:48.046570   59296 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0817 00:41:48.046909   59296 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (504826016 bytes)
	I0817 00:42:47.091118   59296 docker.go:500] Took 59.074868 seconds to copy over tarball
	I0817 00:42:47.103276   59296 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0817 00:43:07.175375   59296 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (20.070855s)
	I0817 00:43:07.175375   59296 ssh_runner.go:100] rm: /preloaded.tar.lz4
	I0817 00:43:07.547497   59296 ssh_runner.go:149] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0817 00:43:07.579384   59296 ssh_runner.go:316] scp memory --> /var/lib/docker/image/overlay2/repositories.json (3152 bytes)
	I0817 00:43:07.641160   59296 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0817 00:43:08.011377   59296 ssh_runner.go:149] Run: sudo systemctl restart docker
	I0817 00:43:09.322082   59296 ssh_runner.go:189] Completed: sudo systemctl restart docker: (1.3106551s)
	I0817 00:43:09.328196   59296 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0817 00:43:09.513486   59296 docker.go:535] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.21.3
	k8s.gcr.io/kube-scheduler:v1.21.3
	k8s.gcr.io/kube-proxy:v1.21.3
	k8s.gcr.io/kube-controller-manager:v1.21.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.4.1
	kubernetesui/dashboard:v2.1.0
	k8s.gcr.io/coredns/coredns:v1.8.0
	k8s.gcr.io/etcd:3.4.13-0
	kubernetesui/metrics-scraper:v1.0.4
	
	-- /stdout --
	I0817 00:43:09.513708   59296 cache_images.go:74] Images are preloaded, skipping loading
	I0817 00:43:09.523796   59296 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
	I0817 00:43:09.968590   59296 cni.go:93] Creating CNI manager for "testdata\\weavenet.yaml"
	I0817 00:43:09.968590   59296 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0817 00:43:09.968590   59296 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-weave-20210817002204-111344 NodeName:custom-weave-20210817002204-111344 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0817 00:43:09.968590   59296 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "custom-weave-20210817002204-111344"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
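The blob printed by kubeadm.go:157 above is four YAML documents in one stream, separated by `---`: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration; minikube later writes it to /var/tmp/minikube/kubeadm.yaml.new (2077 bytes, see 00:43:10 below). A small illustrative sketch of walking such a multi-document stream with gopkg.in/yaml.v3 (not minikube's code):

    package main

    import (
        "fmt"
        "io"
        "log"
        "os"

        "gopkg.in/yaml.v3"
    )

    // printKinds decodes a multi-document YAML stream like the kubeadm
    // config above and prints each document's kind.
    func printKinds(r io.Reader) error {
        dec := yaml.NewDecoder(r)
        for {
            var doc struct {
                Kind string `yaml:"kind"`
            }
            err := dec.Decode(&doc)
            if err == io.EOF {
                return nil // end of stream
            }
            if err != nil {
                return err
            }
            fmt.Println(doc.Kind)
        }
    }

    func main() {
        f, err := os.Open("kubeadm.yaml")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()
        if err := printKinds(f); err != nil {
            log.Fatal(err)
        }
    }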
	
	I0817 00:43:09.968590   59296 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=custom-weave-20210817002204-111344 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:custom-weave-20210817002204-111344 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\weavenet.yaml NodeIP: NodePort:8443 NodeName:}
	I0817 00:43:09.968590   59296 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0817 00:43:10.008463   59296 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 00:43:10.015809   59296 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 00:43:10.047803   59296 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I0817 00:43:10.089861   59296 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0817 00:43:10.147306   59296 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2077 bytes)
	I0817 00:43:10.214853   59296 ssh_runner.go:149] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0817 00:43:10.239259   59296 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 00:43:10.304161   59296 certs.go:52] Setting up C:\Users\jenkins\minikube-integration\.minikube\profiles\custom-weave-20210817002204-111344 for IP: 192.168.58.2
	I0817 00:43:10.304682   59296 certs.go:179] skipping minikubeCA CA generation: C:\Users\jenkins\minikube-integration\.minikube\ca.key
	I0817 00:43:10.304682   59296 certs.go:179] skipping proxyClientCA CA generation: C:\Users\jenkins\minikube-integration\.minikube\proxy-client-ca.key
	I0817 00:43:10.305276   59296 certs.go:297] generating minikube-user signed cert: C:\Users\jenkins\minikube-integration\.minikube\profiles\custom-weave-20210817002204-111344\client.key
	I0817 00:43:10.305276   59296 crypto.go:69] Generating cert C:\Users\jenkins\minikube-integration\.minikube\profiles\custom-weave-20210817002204-111344\client.crt with IP's: []
	I0817 00:43:10.609150   59296 crypto.go:157] Writing cert to C:\Users\jenkins\minikube-integration\.minikube\profiles\custom-weave-20210817002204-111344\client.crt ...
	I0817 00:43:10.609150   59296 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\.minikube\profiles\custom-weave-20210817002204-111344\client.crt: {Name:mk84d4f1e58ae20cb5e2cf2d8a0eab4fae9160d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 00:43:10.611155   59296 crypto.go:165] Writing key to C:\Users\jenkins\minikube-integration\.minikube\profiles\custom-weave-20210817002204-111344\client.key ...
	I0817 00:43:10.611155   59296 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\.minikube\profiles\custom-weave-20210817002204-111344\client.key: {Name:mkdc8487fdd07c6f862123a750bd3faf461d30db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 00:43:10.613151   59296 certs.go:297] generating minikube signed cert: C:\Users\jenkins\minikube-integration\.minikube\profiles\custom-weave-20210817002204-111344\apiserver.key.cee25041
	I0817 00:43:10.613151   59296 crypto.go:69] Generating cert C:\Users\jenkins\minikube-integration\.minikube\profiles\custom-weave-20210817002204-111344\apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0817 00:43:10.839297   59296 crypto.go:157] Writing cert to C:\Users\jenkins\minikube-integration\.minikube\profiles\custom-weave-20210817002204-111344\apiserver.crt.cee25041 ...
	I0817 00:43:10.840296   59296 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\.minikube\profiles\custom-weave-20210817002204-111344\apiserver.crt.cee25041: {Name:mk456951884973acbf92b20c44ee7e7ffc26aee4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 00:43:10.841298   59296 crypto.go:165] Writing key to C:\Users\jenkins\minikube-integration\.minikube\profiles\custom-weave-20210817002204-111344\apiserver.key.cee25041 ...
	I0817 00:43:10.841298   59296 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\.minikube\profiles\custom-weave-20210817002204-111344\apiserver.key.cee25041: {Name:mk1b30baaacca8cb4e8a6da0c65e67b2063ce9fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 00:43:10.843299   59296 certs.go:308] copying C:\Users\jenkins\minikube-integration\.minikube\profiles\custom-weave-20210817002204-111344\apiserver.crt.cee25041 -> C:\Users\jenkins\minikube-integration\.minikube\profiles\custom-weave-20210817002204-111344\apiserver.crt
	I0817 00:43:10.848286   59296 certs.go:312] copying C:\Users\jenkins\minikube-integration\.minikube\profiles\custom-weave-20210817002204-111344\apiserver.key.cee25041 -> C:\Users\jenkins\minikube-integration\.minikube\profiles\custom-weave-20210817002204-111344\apiserver.key
	I0817 00:43:10.850303   59296 certs.go:297] generating aggregator signed cert: C:\Users\jenkins\minikube-integration\.minikube\profiles\custom-weave-20210817002204-111344\proxy-client.key
	I0817 00:43:10.850303   59296 crypto.go:69] Generating cert C:\Users\jenkins\minikube-integration\.minikube\profiles\custom-weave-20210817002204-111344\proxy-client.crt with IP's: []
	I0817 00:43:11.104885   59296 crypto.go:157] Writing cert to C:\Users\jenkins\minikube-integration\.minikube\profiles\custom-weave-20210817002204-111344\proxy-client.crt ...
	I0817 00:43:11.104885   59296 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\.minikube\profiles\custom-weave-20210817002204-111344\proxy-client.crt: {Name:mk2c0fafded204aedf5ec136c86d8fbb6f98997a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 00:43:11.106881   59296 crypto.go:165] Writing key to C:\Users\jenkins\minikube-integration\.minikube\profiles\custom-weave-20210817002204-111344\proxy-client.key ...
	I0817 00:43:11.106881   59296 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\.minikube\profiles\custom-weave-20210817002204-111344\proxy-client.key: {Name:mk7066a90e2d48f1a566aa86ed2d56adaa51f6ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
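The certs.go lines at 00:43:10–11 mint three leaf certificates against the already-existing minikubeCA and proxyClientCA (their generation is skipped): a client cert for kubectl, an apiserver serving cert whose SANs are exactly the IPs printed above (192.168.58.2, 10.96.0.1, 127.0.0.1, 10.0.0.1), and an aggregator proxy-client cert. A minimal crypto/x509 sketch of the SAN-bearing signing step, with hypothetical function and parameter names:

    package certs

    import (
        "crypto"
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // signServerCert builds a template carrying the SAN IPs from the log and
    // has the cluster CA sign it, returning DER-encoded certificate bytes.
    func signServerCert(caCert *x509.Certificate, caKey crypto.Signer, pub *rsa.PublicKey) ([]byte, error) {
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{CommonName: "minikube"},
            IPAddresses: []net.IP{
                net.ParseIP("192.168.58.2"), net.ParseIP("10.96.0.1"),
                net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
            },
            NotBefore:   time.Now(),
            NotAfter:    time.Now().AddDate(3, 0, 0),
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        return x509.CreateCertificate(rand.Reader, tmpl, caCert, pub, caKey)
    }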
	I0817 00:43:11.114919   59296 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\111344.pem (1338 bytes)
	W0817 00:43:11.115919   59296 certs.go:372] ignoring C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\111344_empty.pem, impossibly tiny 0 bytes
	I0817 00:43:11.115919   59296 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0817 00:43:11.115919   59296 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0817 00:43:11.115919   59296 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0817 00:43:11.115919   59296 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0817 00:43:11.116941   59296 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\1113442.pem (1708 bytes)
	I0817 00:43:11.118920   59296 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\custom-weave-20210817002204-111344\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 00:43:11.264583   59296 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\custom-weave-20210817002204-111344\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0817 00:43:11.402704   59296 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\custom-weave-20210817002204-111344\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 00:43:11.481794   59296 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\custom-weave-20210817002204-111344\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0817 00:43:11.588502   59296 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 00:43:11.707490   59296 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0817 00:43:11.789359   59296 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 00:43:11.865021   59296 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0817 00:43:11.965449   59296 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\1113442.pem --> /usr/share/ca-certificates/1113442.pem (1708 bytes)
	I0817 00:43:12.046592   59296 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 00:43:12.134547   59296 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\certs\111344.pem --> /usr/share/ca-certificates/111344.pem (1338 bytes)
	I0817 00:43:12.234584   59296 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0817 00:43:12.298507   59296 ssh_runner.go:149] Run: openssl version
	I0817 00:43:12.326207   59296 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1113442.pem && ln -fs /usr/share/ca-certificates/1113442.pem /etc/ssl/certs/1113442.pem"
	I0817 00:43:12.378416   59296 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1113442.pem
	I0817 00:43:12.405775   59296 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 16 23:23 /usr/share/ca-certificates/1113442.pem
	I0817 00:43:12.414627   59296 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1113442.pem
	I0817 00:43:12.442417   59296 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1113442.pem /etc/ssl/certs/3ec20f2e.0"
	I0817 00:43:12.480962   59296 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 00:43:12.520343   59296 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 00:43:12.535438   59296 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 16 23:12 /usr/share/ca-certificates/minikubeCA.pem
	I0817 00:43:12.543442   59296 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 00:43:12.586915   59296 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0817 00:43:12.643100   59296 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111344.pem && ln -fs /usr/share/ca-certificates/111344.pem /etc/ssl/certs/111344.pem"
	I0817 00:43:12.679422   59296 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/111344.pem
	I0817 00:43:12.698264   59296 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 16 23:23 /usr/share/ca-certificates/111344.pem
	I0817 00:43:12.706855   59296 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111344.pem
	I0817 00:43:12.738851   59296 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/111344.pem /etc/ssl/certs/51391683.0"
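	# The scp/openssl/ln sequence above is the standard CA-install dance: copy each
	# PEM into /usr/share/ca-certificates, hash it, and point an OpenSSL-style
	# /etc/ssl/certs/<subject-hash>.0 symlink at it so TLS stacks can look the CA
	# up by hashed subject. A minimal standalone sketch of one iteration (an
	# illustration of the logged commands, not minikube's own code):
	pem=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$pem")
	sudo /bin/bash -c "test -L /etc/ssl/certs/$hash.0 || ln -fs $pem /etc/ssl/certs/$hash.0"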
	I0817 00:43:12.792994   59296 kubeadm.go:390] StartCluster: {Name:custom-weave-20210817002204-111344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:custom-weave-20210817002204-111344 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 00:43:12.799440   59296 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0817 00:43:12.948560   59296 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 00:43:13.002671   59296 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 00:43:13.036396   59296 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0817 00:43:13.043569   59296 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 00:43:13.084580   59296 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 00:43:13.084790   59296 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0817 00:44:08.461550   59296 out.go:204]   - Generating certificates and keys ...
	I0817 00:44:08.473048   59296 out.go:204]   - Booting up control plane ...
	I0817 00:44:08.476537   59296 out.go:204]   - Configuring RBAC rules ...
	I0817 00:44:08.481690   59296 cni.go:93] Creating CNI manager for "testdata\\weavenet.yaml"
	I0817 00:44:08.483351   59296 out.go:177] * Configuring testdata\weavenet.yaml (Container Networking Interface) ...
	I0817 00:44:08.508643   59296 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0817 00:44:08.524867   59296 ssh_runner.go:149] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
	I0817 00:44:08.583159   59296 ssh_runner.go:306] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/tmp/minikube/cni.yaml': No such file or directory
	I0817 00:44:08.583298   59296 ssh_runner.go:316] scp testdata\weavenet.yaml --> /var/tmp/minikube/cni.yaml (10948 bytes)
	I0817 00:44:08.981372   59296 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0817 00:44:15.360330   59296 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (6.3787156s)
	I0817 00:44:15.360560   59296 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0817 00:44:15.372018   59296 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 00:44:15.380263   59296 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48 minikube.k8s.io/name=custom-weave-20210817002204-111344 minikube.k8s.io/updated_at=2021_08_17T00_44_15_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 00:44:15.925628   59296 ops.go:34] apiserver oom_adj: -16
	I0817 00:44:17.400189   59296 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48 minikube.k8s.io/name=custom-weave-20210817002204-111344 minikube.k8s.io/updated_at=2021_08_17T00_44_15_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig: (2.019839s)
	I0817 00:44:17.401154   59296 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig: (2.0277516s)
	I0817 00:44:17.412034   59296 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 00:44:19.476546   59296 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (2.0644183s)
	I0817 00:44:19.987648   59296 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 00:44:21.793118   59296 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (1.8052833s)
	I0817 00:44:21.793253   59296 kubeadm.go:985] duration metric: took 6.4324237s to wait for elevateKubeSystemPrivileges.
	I0817 00:44:21.793253   59296 kubeadm.go:392] StartCluster complete in 1m8.9976117s
	I0817 00:44:21.793253   59296 settings.go:142] acquiring lock: {Name:mk81656fcf8bcddd49caaa1adb1c177165a02100 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 00:44:21.794034   59296 settings.go:150] Updating kubeconfig:  C:\Users\jenkins\minikube-integration\kubeconfig
	I0817 00:44:21.799166   59296 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\kubeconfig: {Name:mk312e0248780fd448f3a83862df8ee597f47373 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 00:44:22.658254   59296 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "custom-weave-20210817002204-111344" rescaled to 1
	I0817 00:44:22.658501   59296 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0817 00:44:22.658501   59296 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0817 00:44:22.658501   59296 start.go:226] Will wait 5m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0817 00:44:22.658773   59296 addons.go:59] Setting storage-provisioner=true in profile "custom-weave-20210817002204-111344"
	I0817 00:44:22.660898   59296 out.go:177] * Verifying Kubernetes components...
	I0817 00:44:22.659112   59296 addons.go:135] Setting addon storage-provisioner=true in "custom-weave-20210817002204-111344"
	I0817 00:44:22.659348   59296 addons.go:59] Setting default-storageclass=true in profile "custom-weave-20210817002204-111344"
	I0817 00:44:22.660345   59296 config.go:177] Loaded profile config "custom-weave-20210817002204-111344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.21.3
	W0817 00:44:22.661153   59296 addons.go:147] addon storage-provisioner should already be in state true
	I0817 00:44:22.661384   59296 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "custom-weave-20210817002204-111344"
	I0817 00:44:22.661613   59296 host.go:66] Checking if "custom-weave-20210817002204-111344" exists ...
	I0817 00:44:22.671570   59296 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 00:44:22.681099   59296 cli_runner.go:115] Run: docker container inspect custom-weave-20210817002204-111344 --format={{.State.Status}}
	I0817 00:44:22.688113   59296 cli_runner.go:115] Run: docker container inspect custom-weave-20210817002204-111344 --format={{.State.Status}}
	I0817 00:44:23.321444   59296 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 00:44:23.323830   59296 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 00:44:23.324922   59296 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0817 00:44:23.332384   59296 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210817002204-111344
	I0817 00:44:23.473242   59296 addons.go:135] Setting addon default-storageclass=true in "custom-weave-20210817002204-111344"
	W0817 00:44:23.473242   59296 addons.go:147] addon default-storageclass should already be in state true
	I0817 00:44:23.473242   59296 host.go:66] Checking if "custom-weave-20210817002204-111344" exists ...
	I0817 00:44:23.485639   59296 cli_runner.go:115] Run: docker container inspect custom-weave-20210817002204-111344 --format={{.State.Status}}
	I0817 00:44:23.896529   59296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55233 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\custom-weave-20210817002204-111344\id_rsa Username:docker}
	I0817 00:44:24.033546   59296 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0817 00:44:24.033902   59296 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0817 00:44:24.054520   59296 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210817002204-111344
	I0817 00:44:24.597380   59296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55233 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\custom-weave-20210817002204-111344\id_rsa Username:docker}
	I0817 00:44:27.035181   59296 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (4.3765137s)
	I0817 00:44:27.035181   59296 ssh_runner.go:189] Completed: sudo systemctl is-active --quiet service kubelet: (4.3634455s)
	I0817 00:44:27.036169   59296 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0817 00:44:27.043957   59296 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" custom-weave-20210817002204-111344
	I0817 00:44:27.197526   59296 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0817 00:44:27.644094   59296 node_ready.go:35] waiting up to 5m0s for node "custom-weave-20210817002204-111344" to be "Ready" ...
	I0817 00:44:27.703279   59296 node_ready.go:49] node "custom-weave-20210817002204-111344" has status "Ready":"True"
	I0817 00:44:27.703279   59296 node_ready.go:38] duration metric: took 59.1823ms waiting for node "custom-weave-20210817002204-111344" to be "Ready" ...
	I0817 00:44:27.704280   59296 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 00:44:27.802201   59296 pod_ready.go:78] waiting up to 5m0s for pod "coredns-558bd4d5db-w8hxg" in "kube-system" namespace to be "Ready" ...
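	# pod_ready.go polls the pod's Ready condition in a loop until its deadline;
	# the CLI analogue of this wait (hypothetical, not what the test itself runs)
	# would be:
	#   kubectl --context custom-weave-20210817002204-111344 -n kube-system \
	#     wait --for=condition=Ready pod/coredns-558bd4d5db-w8hxg --timeout=5m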
	I0817 00:44:28.193533   59296 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 00:44:30.405537   59296 pod_ready.go:102] pod "coredns-558bd4d5db-w8hxg" in "kube-system" namespace has status "Ready":"False"
	I0817 00:44:32.821187   59296 pod_ready.go:102] pod "coredns-558bd4d5db-w8hxg" in "kube-system" namespace has status "Ready":"False"
	I0817 00:44:35.257760   59296 pod_ready.go:102] pod "coredns-558bd4d5db-w8hxg" in "kube-system" namespace has status "Ready":"False"
	I0817 00:44:35.743526   59296 pod_ready.go:97] error getting pod "coredns-558bd4d5db-w8hxg" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-w8hxg" not found
	I0817 00:44:35.743526   59296 pod_ready.go:81] duration metric: took 7.9408702s waiting for pod "coredns-558bd4d5db-w8hxg" in "kube-system" namespace to be "Ready" ...
	E0817 00:44:35.743526   59296 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-558bd4d5db-w8hxg" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-w8hxg" not found
	I0817 00:44:35.743526   59296 pod_ready.go:78] waiting up to 5m0s for pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace to be "Ready" ...
	I0817 00:44:37.868095   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:44:40.370317   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:44:42.873672   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:44:45.384655   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:44:47.911695   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:44:50.383272   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:44:51.616611   59296 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (24.5795079s)
	I0817 00:44:51.616810   59296 start.go:728] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0817 00:44:51.616611   59296 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (24.417939s)
	I0817 00:44:51.616810   59296 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (23.4223869s)
	I0817 00:44:51.618906   59296 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0817 00:44:51.619167   59296 addons.go:344] enableAddons completed in 28.9595656s
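	# The "get configmap coredns | sed | kubectl replace" pipeline above injects a
	# host record ahead of CoreDNS's forward directive; after the replace, the
	# Corefile section should read roughly as follows (reconstructed from the sed
	# expression; indentation as in the stock Corefile):
	#
	#        hosts {
	#           192.168.65.2 host.minikube.internal
	#           fallthrough
	#        }
	#        forward . /etc/resolv.conf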
	I0817 00:44:52.851321   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	[... 97 further identical pod_ready.go:102 polls omitted; "coredns-558bd4d5db-xnqd6" remained "Ready":"False" through 00:48:35 ...]
	I0817 00:48:35.870177   59296 pod_ready.go:81] duration metric: took 4m0.1175262s waiting for pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace to be "Ready" ...
	E0817 00:48:35.870177   59296 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0817 00:48:35.870177   59296 pod_ready.go:78] waiting up to 5m0s for pod "etcd-custom-weave-20210817002204-111344" in "kube-system" namespace to be "Ready" ...
	I0817 00:48:35.896516   59296 pod_ready.go:92] pod "etcd-custom-weave-20210817002204-111344" in "kube-system" namespace has status "Ready":"True"
	I0817 00:48:35.896516   59296 pod_ready.go:81] duration metric: took 26.2247ms waiting for pod "etcd-custom-weave-20210817002204-111344" in "kube-system" namespace to be "Ready" ...
	I0817 00:48:35.896516   59296 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-custom-weave-20210817002204-111344" in "kube-system" namespace to be "Ready" ...
	I0817 00:48:35.919013   59296 pod_ready.go:92] pod "kube-apiserver-custom-weave-20210817002204-111344" in "kube-system" namespace has status "Ready":"True"
	I0817 00:48:35.919013   59296 pod_ready.go:81] duration metric: took 22.4966ms waiting for pod "kube-apiserver-custom-weave-20210817002204-111344" in "kube-system" namespace to be "Ready" ...
	I0817 00:48:35.919013   59296 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-custom-weave-20210817002204-111344" in "kube-system" namespace to be "Ready" ...
	I0817 00:48:35.953581   59296 pod_ready.go:92] pod "kube-controller-manager-custom-weave-20210817002204-111344" in "kube-system" namespace has status "Ready":"True"
	I0817 00:48:35.953703   59296 pod_ready.go:81] duration metric: took 34.6885ms waiting for pod "kube-controller-manager-custom-weave-20210817002204-111344" in "kube-system" namespace to be "Ready" ...
	I0817 00:48:35.953781   59296 pod_ready.go:78] waiting up to 5m0s for pod "kube-proxy-xhs8r" in "kube-system" namespace to be "Ready" ...
	I0817 00:48:36.228252   59296 pod_ready.go:92] pod "kube-proxy-xhs8r" in "kube-system" namespace has status "Ready":"True"
	I0817 00:48:36.228252   59296 pod_ready.go:81] duration metric: took 274.4611ms waiting for pod "kube-proxy-xhs8r" in "kube-system" namespace to be "Ready" ...
	I0817 00:48:36.228444   59296 pod_ready.go:78] waiting up to 5m0s for pod "kube-scheduler-custom-weave-20210817002204-111344" in "kube-system" namespace to be "Ready" ...
	I0817 00:48:36.632778   59296 pod_ready.go:92] pod "kube-scheduler-custom-weave-20210817002204-111344" in "kube-system" namespace has status "Ready":"True"
	I0817 00:48:36.632931   59296 pod_ready.go:81] duration metric: took 404.471ms waiting for pod "kube-scheduler-custom-weave-20210817002204-111344" in "kube-system" namespace to be "Ready" ...
	I0817 00:48:36.632931   59296 pod_ready.go:78] waiting up to 5m0s for pod "weave-net-ptvvr" in "kube-system" namespace to be "Ready" ...
	I0817 00:48:39.113301   59296 pod_ready.go:102] pod "weave-net-ptvvr" in "kube-system" namespace has status "Ready":"False"
	[... 102 further identical pod_ready.go:102 polls omitted; "weave-net-ptvvr" remained "Ready":"False" through 00:52:36 ...]
	I0817 00:52:37.104202   59296 pod_ready.go:81] duration metric: took 4m0.4620844s waiting for pod "weave-net-ptvvr" in "kube-system" namespace to be "Ready" ...
	E0817 00:52:37.104202   59296 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0817 00:52:37.104202   59296 pod_ready.go:38] duration metric: took 8m9.3812757s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 00:52:37.104202   59296 api_server.go:50] waiting for apiserver process to appear ...
	I0817 00:52:37.110563   59296 out.go:177] 
	W0817 00:52:37.111745   59296 out.go:242] X Exiting due to K8S_APISERVER_MISSING: wait 5m0s for node: wait for apiserver proc: apiserver process never appeared
	W0817 00:52:37.131117   59296 out.go:242] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W0817 00:52:37.131712   59296 out.go:242] * Related issues:
	W0817 00:52:37.137324   59296 out.go:242]   - https://github.com/kubernetes/minikube/issues/4536
	W0817 00:52:37.137512   59296 out.go:242]   - https://github.com/kubernetes/minikube/issues/6014
	I0817 00:52:37.139348   59296 out.go:177] 

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/custom-weave/Start
net_test.go:100: failed start: exit status 105
--- FAIL: TestNetworkPlugins/group/custom-weave/Start (688.71s)
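The pattern in the log above: kubeadm init succeeded and testdata\weavenet.yaml was applied cleanly at 00:44:15, but neither the replacement coredns pod nor the weave-net pod ever reached "Ready", and by the time the extra wait expired the apiserver process had disappeared, yielding K8S_APISERVER_MISSING. A first triage step when reproducing is to query the Weave DaemonSet pod directly; assuming the stock weave-kube manifest (which labels its pods name=weave-net), something like:

	kubectl --context custom-weave-20210817002204-111344 -n kube-system get pods -l name=weave-net -o wide
	kubectl --context custom-weave-20210817002204-111344 -n kube-system logs -l name=weave-net -c weave --tail=50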

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (63.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-windows-amd64.exe pause -p newest-cni-20210817003608-111344 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-windows-amd64.exe pause -p newest-cni-20210817003608-111344 --alsologtostderr -v=1: exit status 80 (6.7246096s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-20210817003608-111344 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0817 00:47:09.745294   64148 out.go:298] Setting OutFile to fd 3600 ...
	I0817 00:47:09.762671   64148 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 00:47:09.762823   64148 out.go:311] Setting ErrFile to fd 4088...
	I0817 00:47:09.762823   64148 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 00:47:09.790850   64148 out.go:305] Setting JSON to false
	I0817 00:47:09.790850   64148 mustload.go:65] Loading cluster: newest-cni-20210817003608-111344
	I0817 00:47:09.791850   64148 config.go:177] Loaded profile config "newest-cni-20210817003608-111344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.0-rc.0
	I0817 00:47:09.803802   64148 cli_runner.go:115] Run: docker container inspect newest-cni-20210817003608-111344 --format={{.State.Status}}
	I0817 00:47:11.632096   64148 cli_runner.go:168] Completed: docker container inspect newest-cni-20210817003608-111344 --format={{.State.Status}}: (1.8282247s)
	I0817 00:47:11.632096   64148 host.go:66] Checking if "newest-cni-20210817003608-111344" exists ...
	I0817 00:47:11.638711   64148 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20210817003608-111344
	I0817 00:47:12.178217   64148 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cni: container-runtime:docker cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.99.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso https://github.com/kubernetes/minikube/releases/download/v1.22.0-1628622362-12032/minikube-v1.22.0-1628622362-12032.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.22.0-1628622362-12032.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:C:\Users\jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-20210817003608-111344 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0817 00:47:12.180139   64148 out.go:177] * Pausing node newest-cni-20210817003608-111344 ... 
	I0817 00:47:12.180139   64148 host.go:66] Checking if "newest-cni-20210817003608-111344" exists ...
	I0817 00:47:12.192134   64148 ssh_runner.go:149] Run: systemctl --version
	I0817 00:47:12.198650   64148 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210817003608-111344
	I0817 00:47:12.681850   64148 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55238 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\newest-cni-20210817003608-111344\id_rsa Username:docker}
	I0817 00:47:13.292085   64148 ssh_runner.go:189] Completed: systemctl --version: (1.0999093s)
	I0817 00:47:13.301810   64148 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 00:47:13.392835   64148 pause.go:50] kubelet running: true
	I0817 00:47:13.401854   64148 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0817 00:47:15.176904   64148 ssh_runner.go:189] Completed: sudo systemctl disable --now kubelet: (1.7749824s)
	I0817 00:47:15.180841   64148 out.go:177] 
	W0817 00:47:15.181167   64148 out.go:242] X Exiting due to GUEST_PAUSE: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	
	W0817 00:47:15.181167   64148 out.go:242] * 
	[warning]: invalid value provided to Color, using default
	[... warning repeated 8 times in total ...]
	W0817 00:47:16.063405   64148 out.go:242] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                              │
	│    * If the above advice does not help, please let us know:                                                  │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                │
	│                                                                                                              │
	│    * Please attach the following file to the GitHub issue:                                                   │
	│    * - C:\Users\jenkins\AppData\Local\Temp\minikube_pause_f7b66d8b6bb1dd36163d41219e23ab4de3e469bc_11.log    │
	│                                                                                                              │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0817 00:47:16.067626   64148 out.go:177] 

** /stderr **
start_stop_delete_test.go:284: out/minikube-windows-amd64.exe pause -p newest-cni-20210817003608-111344 --alsologtostderr -v=1 failed: exit status 80
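The exit status 80 above comes from the node-side command "sudo systemctl disable --now kubelet": the stderr shows systemd handing the unit to /lib/systemd/systemd-sysv-install, whose update-rc.d call aborts because the kubelet SysV compatibility script's LSB Default-Start header lists no runlevels. A minimal Go sketch of that disable step, assuming it runs directly on the node rather than over SSH as the test binary does; the is-active fallback is a hypothetical mitigation for illustration, not minikube's code:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// "disable --now" both stops the unit and removes it from boot.
		out, err := exec.Command("sudo", "systemctl", "disable", "--now", "kubelet").CombinedOutput()
		if err == nil {
			return
		}
		// On this image the "disable" half fails inside update-rc.d (the LSB
		// header lists no runlevels) even when the "--now" stop succeeded, so
		// check whether the unit is actually still active before treating the
		// error as fatal. (Hypothetical mitigation, for illustration only.)
		if exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() != nil {
			fmt.Printf("kubelet stopped; ignoring disable error: %v\n%s", err, out)
			return
		}
		fmt.Printf("kubelet still active, disable failed: %v\n%s", err, out)
	}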
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect newest-cni-20210817003608-111344
helpers_test.go:236: (dbg) docker inspect newest-cni-20210817003608-111344:

-- stdout --
	[
	    {
	        "Id": "143a9dd09ce88e12ec2a22bbe8cc0ef3ae7ca0b95bd6a2b6697406686aa3bcbb",
	        "Created": "2021-08-17T00:40:30.816513Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 274995,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-17T00:45:30.58461Z",
	            "FinishedAt": "2021-08-17T00:45:17.3236145Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/143a9dd09ce88e12ec2a22bbe8cc0ef3ae7ca0b95bd6a2b6697406686aa3bcbb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/143a9dd09ce88e12ec2a22bbe8cc0ef3ae7ca0b95bd6a2b6697406686aa3bcbb/hostname",
	        "HostsPath": "/var/lib/docker/containers/143a9dd09ce88e12ec2a22bbe8cc0ef3ae7ca0b95bd6a2b6697406686aa3bcbb/hosts",
	        "LogPath": "/var/lib/docker/containers/143a9dd09ce88e12ec2a22bbe8cc0ef3ae7ca0b95bd6a2b6697406686aa3bcbb/143a9dd09ce88e12ec2a22bbe8cc0ef3ae7ca0b95bd6a2b6697406686aa3bcbb-json.log",
	        "Name": "/newest-cni-20210817003608-111344",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "newest-cni-20210817003608-111344:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-20210817003608-111344",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/cd01e88b5bfaeb04bd830f98aea8fbd63038ea29218406dc9dfb87a019607b9e-init/diff:/var/lib/docker/overlay2/e167e57d4b442602b2435f5ffd2147b1da53de34df49d96ce69565867fcf3850/diff:/var/lib/docker/overlay2/dbfef15a73962254d5bcc2c91a409021fc3573c3135096621d707c6f4feaac7d/diff:/var/lib/docker/overlay2/7fc44848dc580276135d9db2b62ce047cfba1909de5e91acbe8c1a5fc8fb3649/diff:/var/lib/docker/overlay2/493996ff2d6a75ef70db2749dded6936397fe536c32e28dda979b8af93e19f13/diff:/var/lib/docker/overlay2/b862553905dec6f42a41351a012fdce386251d97160f74f6b1feb3b455e1f53a/diff:/var/lib/docker/overlay2/517a8b2830d9e81ff950c8305063a6681219abbb7b22f3a87587fa819a0728ed/diff:/var/lib/docker/overlay2/f2b268080cfd9bbb64731ea6b7cb2ec64077e6c2701c2ab6e8b358a541056c5d/diff:/var/lib/docker/overlay2/ee5e612696333c681900cad605a1f678e9114e9c7ecf70717fad21aea1e52992/diff:/var/lib/docker/overlay2/6f44289af0b09a02645c237aabeff61487c57040b9531c0f7bd97517308bfd57/diff:/var/lib/docker/overlay2/f98f67
21a411bacf9d310d4d4405fbd528fa90d60af5ffabda9d55cef9ef3033/diff:/var/lib/docker/overlay2/8bc2e0f6b7c2aeccc6a944f316dbac5672f8685cc5dd5d3c2fc4bd370db4949f/diff:/var/lib/docker/overlay2/ef9e793c1e243004ff088f210369994837eb19a8abd21cf93f75257155445f16/diff:/var/lib/docker/overlay2/48fa7f37fc37f8220a31f4294bc800ef7a33c53c10bdc23d7dc68f27cfe4e535/diff:/var/lib/docker/overlay2/54bc5e0e0c32fdc66ce3eeb345721201a63a0c878d4665607246cd4aa5af61e5/diff:/var/lib/docker/overlay2/398c3fc63254fcc564086ced0eb7211f2d474f8bbdcd43ee27fd609e767c44a6/diff:/var/lib/docker/overlay2/796acb5b93384da004a8065a332cbb07c952569bdd7bb5e551b218e4c5c61f73/diff:/var/lib/docker/overlay2/d90baef87ad95bdfb14a2f35e4cb62336e18c21eb934266f43bfbe017252b857/diff:/var/lib/docker/overlay2/c16752decc8ef06fce4eebdf4ff4725414f3aa80cccd7b3ffdc325095930c0b4/diff:/var/lib/docker/overlay2/a679084eec181b0e1408e573d1ac08c47af1fd8266eb5884bf1a38d5ba0ddbbc/diff:/var/lib/docker/overlay2/15becb79b0d40211562ae33ddc5ec776276b9ae42c8a9f4645dcc6442b36f771/diff:/var/lib/d
ocker/overlay2/068a9a5dce1094eb72788237bd9cda4c76345774d5e647f0af81302a75861f4a/diff:/var/lib/docker/overlay2/74b9e9d807e09734ee96c76bc67adc56c9e3286b39315f89f6747c8c917ad2e5/diff:/var/lib/docker/overlay2/75de8e4895a0b4efe563705c06184db384b5c40154856b9bca2106a8d59fc151/diff:/var/lib/docker/overlay2/cbca3c40b21fee2ef276744168492f17203934aca8de4b459edae2fa55b6bb02/diff:/var/lib/docker/overlay2/584d28a6308bb998bd89d7d92c45b57b9dd66de472d166972d2f5195afd9dd44/diff:/var/lib/docker/overlay2/9c722118749c036eb2d00ba5a6925c5f32b121d64974c99e2de552b26a8bb7cd/diff:/var/lib/docker/overlay2/24908c792743f57c182587c66263f074ed86ae7c5812c631dea82d8ec6650e81/diff:/var/lib/docker/overlay2/9a8de59bfb816b3fc2f0fd522ef966196534483b5e87aafd180dd8b07e9c3582/diff:/var/lib/docker/overlay2/df46d170084213da519dea7e0f402d51272dc10df4d7cd7f37c528c411ac7000/diff:/var/lib/docker/overlay2/36b86a6f515e5882426e598755bb77d43cc340fd20798dfd0a810cd2ab96eeb6/diff:/var/lib/docker/overlay2/b54ac02f70047359cd143a32f862d18498cb556877ccfd252defb9d17fc
9d9f5/diff:/var/lib/docker/overlay2/971b77d080920997e1d0d0936f312a9a322ccd6ab9920c83a8eb5d14b93c3849/diff:/var/lib/docker/overlay2/5b5c21ae360c7e0738c0048bc3fe8d7d3cc0640d266660121f3968f675f42063/diff:/var/lib/docker/overlay2/e07bf2561a99ba47435b8f84b267268e02e9e4ff47832bd5054ee28bb1ca5001/diff:/var/lib/docker/overlay2/0c560be48f01814af21ec54fc79ea5e8db28f05e967a17b331be28ad61c75483/diff:/var/lib/docker/overlay2/27930667f3fd0fd38c13a39c0590c03a2c3b3ba04f0a3c946167be6a40f50c46/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cd01e88b5bfaeb04bd830f98aea8fbd63038ea29218406dc9dfb87a019607b9e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cd01e88b5bfaeb04bd830f98aea8fbd63038ea29218406dc9dfb87a019607b9e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cd01e88b5bfaeb04bd830f98aea8fbd63038ea29218406dc9dfb87a019607b9e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-20210817003608-111344",
	                "Source": "/var/lib/docker/volumes/newest-cni-20210817003608-111344/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-20210817003608-111344",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-20210817003608-111344",
	                "name.minikube.sigs.k8s.io": "newest-cni-20210817003608-111344",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cf4be4471fd151b3d8822ccf0bfaf4290188218d64d0cb47b6ceb41287f90877",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55238"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55237"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55234"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55236"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55235"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/cf4be4471fd1",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-20210817003608-111344": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "143a9dd09ce8",
	                        "newest-cni-20210817003608-111344"
	                    ],
	                    "NetworkID": "2e2979479cd3df0ec63a7a5d29fed62692100e75759df90c874233a471610ab5",
	                    "EndpointID": "f3f0d26563739858c0c00e6958045152e637f1dc1d741ffa8f1a6d707b88e1f9",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
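The "22/tcp" entry under NetworkSettings.Ports in the inspect output above is what the earlier cli_runner call resolves in order to reach the node over SSH (host port 55238 in this run). A minimal Go sketch of that lookup, reusing the inspect template from the log; it assumes a local docker CLI and uses the container name from this report:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		const name = "newest-cni-20210817003608-111344"
		// Same Go template the cli_runner log lines pass to "docker container inspect -f".
		const tmpl = `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, name).Output()
		if err != nil {
			panic(err)
		}
		fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // 55238 in the run above
	}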
helpers_test.go:240: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20210817003608-111344 -n newest-cni-20210817003608-111344
helpers_test.go:240: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20210817003608-111344 -n newest-cni-20210817003608-111344: (5.7423051s)
helpers_test.go:245: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p newest-cni-20210817003608-111344 logs -n 25
E0817 00:47:22.799143  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\no-preload-20210817002237-111344\client.crt: The system cannot find the path specified.

=== CONT  TestStartStop/group/newest-cni/serial/Pause
helpers_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p newest-cni-20210817003608-111344 logs -n 25: (16.3182544s)
helpers_test.go:253: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|--------------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                     Profile                      |          User           | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|--------------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	| delete  | -p                                                         | embed-certs-20210817002328-111344                | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:36:07 GMT | Tue, 17 Aug 2021 00:36:27 GMT |
	|         | embed-certs-20210817002328-111344                          |                                                  |                         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20210817002328-111344                | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:36:28 GMT | Tue, 17 Aug 2021 00:36:33 GMT |
	|         | embed-certs-20210817002328-111344                          |                                                  |                         |         |                               |                               |
	| unpause | -p                                                         | no-preload-20210817002237-111344                 | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:36:32 GMT | Tue, 17 Aug 2021 00:36:36 GMT |
	|         | no-preload-20210817002237-111344                           |                                                  |                         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                  |                         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20210817002237-111344                 | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:36:46 GMT | Tue, 17 Aug 2021 00:37:03 GMT |
	|         | no-preload-20210817002237-111344                           |                                                  |                         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20210817002237-111344                 | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:37:04 GMT | Tue, 17 Aug 2021 00:37:09 GMT |
	|         | no-preload-20210817002237-111344                           |                                                  |                         |         |                               |                               |
	| start   | -p                                                         | default-k8s-different-port-20210817002733-111344 | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:31:07 GMT | Tue, 17 Aug 2021 00:38:22 GMT |
	|         | default-k8s-different-port-20210817002733-111344           |                                                  |                         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                  |                         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=docker                      |                                                  |                         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                  |                         |         |                               |                               |
	| ssh     | -p                                                         | default-k8s-different-port-20210817002733-111344 | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:38:44 GMT | Tue, 17 Aug 2021 00:38:48 GMT |
	|         | default-k8s-different-port-20210817002733-111344           |                                                  |                         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                  |                         |         |                               |                               |
	| pause   | -p                                                         | default-k8s-different-port-20210817002733-111344 | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:38:49 GMT | Tue, 17 Aug 2021 00:38:53 GMT |
	|         | default-k8s-different-port-20210817002733-111344           |                                                  |                         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                  |                         |         |                               |                               |
	| unpause | -p                                                         | default-k8s-different-port-20210817002733-111344 | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:39:02 GMT | Tue, 17 Aug 2021 00:39:07 GMT |
	|         | default-k8s-different-port-20210817002733-111344           |                                                  |                         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                  |                         |         |                               |                               |
	| delete  | -p                                                         | default-k8s-different-port-20210817002733-111344 | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:39:16 GMT | Tue, 17 Aug 2021 00:39:34 GMT |
	|         | default-k8s-different-port-20210817002733-111344           |                                                  |                         |         |                               |                               |
	| delete  | -p                                                         | default-k8s-different-port-20210817002733-111344 | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:39:35 GMT | Tue, 17 Aug 2021 00:39:40 GMT |
	|         | default-k8s-different-port-20210817002733-111344           |                                                  |                         |         |                               |                               |
	| start   | -p auto-20210817002157-111344                              | auto-20210817002157-111344                       | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:36:34 GMT | Tue, 17 Aug 2021 00:39:52 GMT |
	|         | --memory=2048                                              |                                                  |                         |         |                               |                               |
	|         | --alsologtostderr                                          |                                                  |                         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m                              |                                                  |                         |         |                               |                               |
	|         | --driver=docker                                            |                                                  |                         |         |                               |                               |
	| ssh     | -p auto-20210817002157-111344                              | auto-20210817002157-111344                       | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:39:52 GMT | Tue, 17 Aug 2021 00:39:55 GMT |
	|         | pgrep -a kubelet                                           |                                                  |                         |         |                               |                               |
	| start   | -p false-20210817002204-111344                             | false-20210817002204-111344                      | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:37:10 GMT | Tue, 17 Aug 2021 00:40:12 GMT |
	|         | --memory=2048                                              |                                                  |                         |         |                               |                               |
	|         | --alsologtostderr --wait=true                              |                                                  |                         |         |                               |                               |
	|         | --wait-timeout=5m --cni=false                              |                                                  |                         |         |                               |                               |
	|         | --driver=docker                                            |                                                  |                         |         |                               |                               |
	| ssh     | -p false-20210817002204-111344                             | false-20210817002204-111344                      | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:40:13 GMT | Tue, 17 Aug 2021 00:40:17 GMT |
	|         | pgrep -a kubelet                                           |                                                  |                         |         |                               |                               |
	| delete  | -p auto-20210817002157-111344                              | auto-20210817002157-111344                       | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:40:19 GMT | Tue, 17 Aug 2021 00:40:41 GMT |
	| delete  | -p false-20210817002204-111344                             | false-20210817002204-111344                      | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:40:49 GMT | Tue, 17 Aug 2021 00:41:08 GMT |
	| start   | -p newest-cni-20210817003608-111344 --memory=2200          | newest-cni-20210817003608-111344                 | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:36:08 GMT | Tue, 17 Aug 2021 00:44:48 GMT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                  |                         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                  |                         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                  |                         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                  |                         |         |                               |                               |
	|         | --driver=docker --kubernetes-version=v1.22.0-rc.0          |                                                  |                         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20210817003608-111344                 | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:44:49 GMT | Tue, 17 Aug 2021 00:44:59 GMT |
	|         | newest-cni-20210817003608-111344                           |                                                  |                         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                  |                         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                  |                         |         |                               |                               |
	| stop    | -p                                                         | newest-cni-20210817003608-111344                 | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:44:59 GMT | Tue, 17 Aug 2021 00:45:18 GMT |
	|         | newest-cni-20210817003608-111344                           |                                                  |                         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                  |                         |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20210817003608-111344                 | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:45:21 GMT | Tue, 17 Aug 2021 00:45:23 GMT |
	|         | newest-cni-20210817003608-111344                           |                                                  |                         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                  |                         |         |                               |                               |
	| start   | -p                                                         | cilium-20210817002204-111344                     | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:39:41 GMT | Tue, 17 Aug 2021 00:46:54 GMT |
	|         | cilium-20210817002204-111344                               |                                                  |                         |         |                               |                               |
	|         | --memory=2048                                              |                                                  |                         |         |                               |                               |
	|         | --alsologtostderr --wait=true                              |                                                  |                         |         |                               |                               |
	|         | --wait-timeout=5m --cni=cilium                             |                                                  |                         |         |                               |                               |
	|         | --driver=docker                                            |                                                  |                         |         |                               |                               |
	| start   | -p newest-cni-20210817003608-111344 --memory=2200          | newest-cni-20210817003608-111344                 | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:45:23 GMT | Tue, 17 Aug 2021 00:46:58 GMT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                  |                         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                  |                         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                  |                         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                  |                         |         |                               |                               |
	|         | --driver=docker --kubernetes-version=v1.22.0-rc.0          |                                                  |                         |         |                               |                               |
	| ssh     | -p                                                         | cilium-20210817002204-111344                     | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:47:00 GMT | Tue, 17 Aug 2021 00:47:04 GMT |
	|         | cilium-20210817002204-111344                               |                                                  |                         |         |                               |                               |
	|         | pgrep -a kubelet                                           |                                                  |                         |         |                               |                               |
	| ssh     | -p                                                         | newest-cni-20210817003608-111344                 | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:47:05 GMT | Tue, 17 Aug 2021 00:47:09 GMT |
	|         | newest-cni-20210817003608-111344                           |                                                  |                         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                  |                         |         |                               |                               |
	|---------|------------------------------------------------------------|--------------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/17 00:45:23
	Running on machine: windows-server-2
	Binary: Built with gc go1.16.7 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0817 00:45:23.620145   32084 out.go:298] Setting OutFile to fd 4016 ...
	I0817 00:45:23.622181   32084 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 00:45:23.622307   32084 out.go:311] Setting ErrFile to fd 784...
	I0817 00:45:23.622307   32084 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 00:45:23.652702   32084 out.go:305] Setting JSON to false
	I0817 00:45:23.656729   32084 start.go:111] hostinfo: {"hostname":"windows-server-2","uptime":8369170,"bootTime":1620791953,"procs":146,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2f8328f4-5428-47c7-ab5a-b32e2504bd6f"}
	W0817 00:45:23.656920   32084 start.go:119] gopshost.Virtualization returned error: not implemented yet
	I0817 00:45:23.659581   32084 out.go:177] * [newest-cni-20210817003608-111344] minikube v1.22.0 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	I0817 00:45:23.660164   32084 notify.go:169] Checking for updates...
	I0817 00:45:23.661916   32084 out.go:177]   - KUBECONFIG=C:\Users\jenkins\minikube-integration\kubeconfig
	I0817 00:45:23.663473   32084 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins\minikube-integration\.minikube
	I0817 00:45:20.847211   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:23.348832   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:22.956548   73600 pod_ready.go:102] pod "cilium-zt4nw" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:24.995189   73600 pod_ready.go:102] pod "cilium-zt4nw" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:21.498475   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:24.005799   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:23.665152   32084 out.go:177]   - MINIKUBE_LOCATION=12230
	I0817 00:45:23.666019   32084 config.go:177] Loaded profile config "newest-cni-20210817003608-111344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.0-rc.0
	I0817 00:45:23.672867   32084 driver.go:335] Setting default libvirt URI to qemu:///system
	I0817 00:45:25.468346   32084 docker.go:132] docker version: linux-20.10.2
	I0817 00:45:25.474301   32084 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 00:45:26.395019   32084 info.go:263] docker info: {ID:4XCY:3GZD:KK67:IPM7:RRQF:WWZF:OGQ6:X6HQ:572M:7N57:P63G:EAE5 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:71 OomKillDisable:true NGoroutines:61 SystemTime:2021-08-17 00:45:25.964029 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0817 00:45:26.397542   32084 out.go:177] * Using the docker driver based on existing profile
	I0817 00:45:26.397742   32084 start.go:278] selected driver: docker
	I0817 00:45:26.397742   32084 start.go:751] validating driver "docker" against &{Name:newest-cni-20210817003608-111344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210817003608-111344 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 00:45:26.397997   32084 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0817 00:45:26.507000   32084 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 00:45:27.309399   32084 info.go:263] docker info: {ID:4XCY:3GZD:KK67:IPM7:RRQF:WWZF:OGQ6:X6HQ:572M:7N57:P63G:EAE5 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:71 OomKillDisable:true NGoroutines:61 SystemTime:2021-08-17 00:45:26.9340576 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0817 00:45:27.310113   32084 start_flags.go:716] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0817 00:45:27.310311   32084 cni.go:93] Creating CNI manager for ""
	I0817 00:45:27.310311   32084 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0817 00:45:27.310311   32084 start_flags.go:277] config:
	{Name:newest-cni-20210817003608-111344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210817003608-111344 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 00:45:27.314944   32084 out.go:177] * Starting control plane node newest-cni-20210817003608-111344 in cluster newest-cni-20210817003608-111344
	I0817 00:45:27.315223   32084 cache.go:117] Beginning downloading kic base image for docker with docker
	I0817 00:45:27.317051   32084 out.go:177] * Pulling base image ...
	I0817 00:45:27.317260   32084 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime docker
	I0817 00:45:27.317260   32084 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0817 00:45:27.317729   32084 preload.go:147] Found local preload: C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v11-v1.22.0-rc.0-docker-overlay2-amd64.tar.lz4
	I0817 00:45:27.317729   32084 cache.go:56] Caching tarball of preloaded images
	I0817 00:45:27.318560   32084 preload.go:173] Found C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v11-v1.22.0-rc.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0817 00:45:27.318872   32084 cache.go:59] Finished verifying existence of preloaded tar for  v1.22.0-rc.0 on docker
	I0817 00:45:27.319099   32084 profile.go:148] Saving config to C:\Users\jenkins\minikube-integration\.minikube\profiles\newest-cni-20210817003608-111344\config.json ...
	I0817 00:45:27.842326   32084 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0817 00:45:27.842326   32084 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0817 00:45:27.842326   32084 cache.go:205] Successfully downloaded all kic artifacts
	I0817 00:45:27.843068   32084 start.go:313] acquiring machines lock for newest-cni-20210817003608-111344: {Name:mk3f16f02a99d1b37ee77f4ca210722696dca362 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 00:45:27.843390   32084 start.go:317] acquired machines lock for "newest-cni-20210817003608-111344" in 321.7µs
	I0817 00:45:27.843677   32084 start.go:93] Skipping create...Using existing machine configuration
	I0817 00:45:27.843677   32084 fix.go:55] fixHost starting: 
	I0817 00:45:27.865162   32084 cli_runner.go:115] Run: docker container inspect newest-cni-20210817003608-111344 --format={{.State.Status}}
	I0817 00:45:28.335499   32084 fix.go:108] recreateIfNeeded on newest-cni-20210817003608-111344: state=Stopped err=<nil>
	W0817 00:45:28.335499   32084 fix.go:134] unexpected machine state, will restart: <nil>
	I0817 00:45:28.341327   32084 out.go:177] * Restarting existing docker container for "newest-cni-20210817003608-111344" ...
	I0817 00:45:28.345077   32084 cli_runner.go:115] Run: docker start newest-cni-20210817003608-111344
	I0817 00:45:25.353473   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:27.931320   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:27.457012   73600 pod_ready.go:102] pod "cilium-zt4nw" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:29.608062   73600 pod_ready.go:102] pod "cilium-zt4nw" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:26.497938   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:28.525322   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:31.001669   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:30.677885   32084 cli_runner.go:168] Completed: docker start newest-cni-20210817003608-111344: (2.3325824s)
	I0817 00:45:30.687699   32084 cli_runner.go:115] Run: docker container inspect newest-cni-20210817003608-111344 --format={{.State.Status}}
	I0817 00:45:31.195879   32084 kic.go:420] container "newest-cni-20210817003608-111344" state is running.
	I0817 00:45:31.197407   32084 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20210817003608-111344
	I0817 00:45:31.724287   32084 profile.go:148] Saving config to C:\Users\jenkins\minikube-integration\.minikube\profiles\newest-cni-20210817003608-111344\config.json ...
	I0817 00:45:31.727922   32084 machine.go:88] provisioning docker machine ...
	I0817 00:45:31.728121   32084 ubuntu.go:169] provisioning hostname "newest-cni-20210817003608-111344"
	I0817 00:45:31.736535   32084 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210817003608-111344
	I0817 00:45:32.270727   32084 main.go:130] libmachine: Using SSH client type: native
	I0817 00:45:32.271286   32084 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x4c95a0] 0x4c9560 <nil>  [] 0s} 127.0.0.1 55238 <nil> <nil>}
	I0817 00:45:32.271286   32084 main.go:130] libmachine: About to run SSH command:
	sudo hostname newest-cni-20210817003608-111344 && echo "newest-cni-20210817003608-111344" | sudo tee /etc/hostname
	I0817 00:45:32.277349   32084 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0817 00:45:30.458438   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:32.859690   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:32.013546   73600 pod_ready.go:102] pod "cilium-zt4nw" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:34.474976   73600 pod_ready.go:102] pod "cilium-zt4nw" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:33.011273   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:35.273595   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:35.746881   32084 main.go:130] libmachine: SSH cmd err, output: <nil>: newest-cni-20210817003608-111344
	
	I0817 00:45:35.750222   32084 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210817003608-111344
	I0817 00:45:36.259781   32084 main.go:130] libmachine: Using SSH client type: native
	I0817 00:45:36.260471   32084 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x4c95a0] 0x4c9560 <nil>  [] 0s} 127.0.0.1 55238 <nil> <nil>}
	I0817 00:45:36.260471   32084 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20210817003608-111344' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20210817003608-111344/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20210817003608-111344' | sudo tee -a /etc/hosts; 
				fi
			fi
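	
	The shell script above keeps the new hostname resolvable locally: if no /etc/hosts line already ends in the machine name, it either rewrites the Debian-style 127.0.1.1 entry in place or appends one. The same idempotent edit, sketched as a pure Go function (function name is an assumption, not minikube's code):
	
	package provision
	
	import "strings"
	
	// ensureHostsEntry returns hosts content with a 127.0.1.1 line for name,
	// mirroring the grep/sed/tee logic above: leave the file alone if name
	// already resolves, rewrite an existing 127.0.1.1 entry, else append one.
	func ensureHostsEntry(hosts, name string) string {
		lines := strings.Split(hosts, "\n")
		for _, l := range lines {
			trimmed := strings.TrimSpace(l)
			if strings.HasSuffix(trimmed, " "+name) || strings.HasSuffix(trimmed, "\t"+name) {
				return hosts // already mapped
			}
		}
		for i, l := range lines {
			if strings.HasPrefix(l, "127.0.1.1") {
				lines[i] = "127.0.1.1 " + name // rewrite in place
				return strings.Join(lines, "\n")
			}
		}
		return hosts + "\n127.0.1.1 " + name + "\n" // append
	}
	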
	I0817 00:45:36.576223   32084 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0817 00:45:36.576223   32084 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins\minikube-integration\.minikube CaCertPath:C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins\minikube-integration\.minikube}
	I0817 00:45:36.576223   32084 ubuntu.go:177] setting up certificates
	I0817 00:45:36.576430   32084 provision.go:83] configureAuth start
	I0817 00:45:36.588708   32084 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20210817003608-111344
	I0817 00:45:37.115437   32084 provision.go:138] copyHostCerts
	I0817 00:45:37.116339   32084 exec_runner.go:145] found C:\Users\jenkins\minikube-integration\.minikube/ca.pem, removing ...
	I0817 00:45:37.116339   32084 exec_runner.go:190] rm: C:\Users\jenkins\minikube-integration\.minikube\ca.pem
	I0817 00:45:37.116896   32084 exec_runner.go:152] cp: C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0817 00:45:37.118372   32084 exec_runner.go:145] found C:\Users\jenkins\minikube-integration\.minikube/cert.pem, removing ...
	I0817 00:45:37.118691   32084 exec_runner.go:190] rm: C:\Users\jenkins\minikube-integration\.minikube\cert.pem
	I0817 00:45:37.119048   32084 exec_runner.go:152] cp: C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0817 00:45:37.120266   32084 exec_runner.go:145] found C:\Users\jenkins\minikube-integration\.minikube/key.pem, removing ...
	I0817 00:45:37.120266   32084 exec_runner.go:190] rm: C:\Users\jenkins\minikube-integration\.minikube\key.pem
	I0817 00:45:37.120505   32084 exec_runner.go:152] cp: C:\Users\jenkins\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins\minikube-integration\.minikube/key.pem (1679 bytes)
	I0817 00:45:37.121936   32084 provision.go:112] generating server cert: C:\Users\jenkins\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.newest-cni-20210817003608-111344 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20210817003608-111344]
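	
	The server certificate is regenerated with the SAN list shown above (the container IP, loopback, the machine name, plus the literal names localhost and minikube), so both in-cluster and host-side clients can verify the Docker endpoint under any of those names. A rough equivalent with crypto/x509, self-signed here for brevity where minikube actually signs with its ca.pem:
	
	package provision
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)
	
	// newServerCert issues a TLS server cert whose SANs cover every name and
	// IP a client might dial. Self-signed for brevity; minikube signs with its CA.
	func newServerCert(machine string, ips ...string) ([]byte, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins." + machine}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", machine},
		}
		for _, ip := range ips {
			tmpl.IPAddresses = append(tmpl.IPAddresses, net.ParseIP(ip))
		}
		der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
		return der, key, err
	}
	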
	I0817 00:45:37.429759   32084 provision.go:172] copyRemoteCerts
	I0817 00:45:37.436769   32084 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 00:45:37.441755   32084 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210817003608-111344
	I0817 00:45:37.945352   32084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55238 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\newest-cni-20210817003608-111344\id_rsa Username:docker}
	I0817 00:45:38.160219   32084 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1269 bytes)
	I0817 00:45:38.254267   32084 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0817 00:45:38.372888   32084 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0817 00:45:35.354301   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:37.356117   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:36.986968   73600 pod_ready.go:102] pod "cilium-zt4nw" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:39.462827   73600 pod_ready.go:102] pod "cilium-zt4nw" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:37.482575   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:39.492424   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:38.477645   32084 provision.go:86] duration metric: configureAuth took 1.9011422s
	I0817 00:45:38.477645   32084 ubuntu.go:193] setting minikube options for container-runtime
	I0817 00:45:38.478106   32084 config.go:177] Loaded profile config "newest-cni-20210817003608-111344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.0-rc.0
	I0817 00:45:38.484472   32084 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210817003608-111344
	I0817 00:45:39.029909   32084 main.go:130] libmachine: Using SSH client type: native
	I0817 00:45:39.029909   32084 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x4c95a0] 0x4c9560 <nil>  [] 0s} 127.0.0.1 55238 <nil> <nil>}
	I0817 00:45:39.029909   32084 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0817 00:45:39.370625   32084 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0817 00:45:39.370824   32084 ubuntu.go:71] root file system type: overlay
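	
	The df probe above tells the provisioner what filesystem backs / inside the container (overlay here, since the kic container itself sits on Docker's overlay2 storage), which feeds into how the Docker unit is rendered. The same probe in Go, a Linux-only sketch:
	
	package provision
	
	import (
		"errors"
		"os/exec"
		"strings"
	)
	
	// rootFSType reports the filesystem type of /, e.g. "overlay" inside a
	// kic container, by shelling out the same way the provisioner does.
	func rootFSType() (string, error) {
		out, err := exec.Command("df", "--output=fstype", "/").Output()
		if err != nil {
			return "", err
		}
		fields := strings.Fields(strings.TrimSpace(string(out)))
		if len(fields) == 0 {
			return "", errors.New("empty df output")
		}
		return fields[len(fields)-1], nil // header first, value last
	}
	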
	I0817 00:45:39.371094   32084 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0817 00:45:39.373185   32084 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210817003608-111344
	I0817 00:45:39.926267   32084 main.go:130] libmachine: Using SSH client type: native
	I0817 00:45:39.926874   32084 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x4c95a0] 0x4c9560 <nil>  [] 0s} 127.0.0.1 55238 <nil> <nil>}
	I0817 00:45:39.927153   32084 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0817 00:45:40.291708   32084 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0817 00:45:40.293212   32084 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210817003608-111344
	I0817 00:45:40.828063   32084 main.go:130] libmachine: Using SSH client type: native
	I0817 00:45:40.828409   32084 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x4c95a0] 0x4c9560 <nil>  [] 0s} 127.0.0.1 55238 <nil> <nil>}
	I0817 00:45:40.828532   32084 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0817 00:45:41.166103   32084 main.go:130] libmachine: SSH cmd err, output: <nil>: 
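	
	The diff-or-replace one-liner above is what makes re-provisioning cheap: docker.service is only swapped in, and the daemon only reloaded and restarted, when the freshly rendered unit actually differs from what is on disk; the empty output here means the unit was unchanged. The shape of that check, sketched in Go as if run on the target host:
	
	package provision
	
	import (
		"bytes"
		"os"
		"os/exec"
	)
	
	// installIfChanged writes newUnit over path and restarts the service only
	// when the content differs, mirroring `diff -u old new || { mv; systemctl
	// daemon-reload; systemctl restart docker; }` so an unchanged unit never
	// bounces dockerd.
	func installIfChanged(path string, newUnit []byte) error {
		old, _ := os.ReadFile(path) // a missing file reads as empty
		if bytes.Equal(old, newUnit) {
			return nil // nothing to do
		}
		if err := os.WriteFile(path, newUnit, 0o644); err != nil {
			return err
		}
		if err := exec.Command("systemctl", "daemon-reload").Run(); err != nil {
			return err
		}
		return exec.Command("systemctl", "restart", "docker").Run()
	}
	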
	I0817 00:45:41.166103   32084 machine.go:91] provisioned docker machine in 9.4378218s
	I0817 00:45:41.166103   32084 start.go:267] post-start starting for "newest-cni-20210817003608-111344" (driver="docker")
	I0817 00:45:41.166103   32084 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 00:45:41.175083   32084 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 00:45:41.180392   32084 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210817003608-111344
	I0817 00:45:41.693159   32084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55238 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\newest-cni-20210817003608-111344\id_rsa Username:docker}
	I0817 00:45:41.906370   32084 ssh_runner.go:149] Run: cat /etc/os-release
	I0817 00:45:41.931079   32084 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0817 00:45:41.931079   32084 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0817 00:45:41.931079   32084 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0817 00:45:41.931079   32084 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0817 00:45:41.931079   32084 filesync.go:126] Scanning C:\Users\jenkins\minikube-integration\.minikube\addons for local assets ...
	I0817 00:45:41.931394   32084 filesync.go:126] Scanning C:\Users\jenkins\minikube-integration\.minikube\files for local assets ...
	I0817 00:45:41.932523   32084 filesync.go:149] local asset: C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\1113442.pem -> 1113442.pem in /etc/ssl/certs
	I0817 00:45:41.940600   32084 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0817 00:45:41.978725   32084 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\1113442.pem --> /etc/ssl/certs/1113442.pem (1708 bytes)
	I0817 00:45:42.103491   32084 start.go:270] post-start completed in 937.3529ms
	I0817 00:45:42.112129   32084 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0817 00:45:42.117806   32084 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210817003608-111344
	I0817 00:45:42.593249   32084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55238 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\newest-cni-20210817003608-111344\id_rsa Username:docker}
	I0817 00:45:42.772691   32084 fix.go:57] fixHost completed within 14.9284466s
	I0817 00:45:42.773259   32084 start.go:80] releasing machines lock for "newest-cni-20210817003608-111344", held for 14.9293017s
	I0817 00:45:42.774551   32084 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20210817003608-111344
	I0817 00:45:43.258004   32084 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0817 00:45:43.265353   32084 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210817003608-111344
	I0817 00:45:43.272183   32084 ssh_runner.go:149] Run: systemctl --version
	I0817 00:45:43.277823   32084 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210817003608-111344
	I0817 00:45:39.841246   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:41.849295   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:43.862305   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:41.484380   73600 pod_ready.go:102] pod "cilium-zt4nw" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:43.494485   73600 pod_ready.go:102] pod "cilium-zt4nw" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:45.951504   73600 pod_ready.go:102] pod "cilium-zt4nw" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:41.516323   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:43.984692   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:45.993241   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:43.809205   32084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55238 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\newest-cni-20210817003608-111344\id_rsa Username:docker}
	I0817 00:45:43.824145   32084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55238 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\newest-cni-20210817003608-111344\id_rsa Username:docker}
	I0817 00:45:44.000599   32084 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0817 00:45:44.140153   32084 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0817 00:45:44.244678   32084 cruntime.go:249] skipping containerd shutdown because we are bound to it
	I0817 00:45:44.252490   32084 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0817 00:45:44.309641   32084 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 00:45:44.393457   32084 ssh_runner.go:149] Run: sudo systemctl unmask docker.service
	I0817 00:45:44.872259   32084 ssh_runner.go:149] Run: sudo systemctl enable docker.socket
	I0817 00:45:45.339132   32084 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0817 00:45:45.406016   32084 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0817 00:45:45.735092   32084 ssh_runner.go:149] Run: sudo systemctl start docker
	I0817 00:45:45.793843   32084 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0817 00:45:46.041552   32084 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0817 00:45:46.217642   32084 out.go:204] * Preparing Kubernetes v1.22.0-rc.0 on Docker 20.10.8 ...
	I0817 00:45:46.224542   32084 cli_runner.go:115] Run: docker exec -t newest-cni-20210817003608-111344 dig +short host.docker.internal
	I0817 00:45:47.050680   32084 network.go:69] got host ip for mount in container by digging dns: 192.168.65.2
	I0817 00:45:47.065057   32084 ssh_runner.go:149] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0817 00:45:47.081577   32084 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 00:45:47.143343   32084 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20210817003608-111344
	I0817 00:45:47.698141   32084 out.go:177]   - kubelet.network-plugin=cni
	I0817 00:45:47.708019   32084 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0817 00:45:47.709930   32084 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime docker
	I0817 00:45:47.717177   32084 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0817 00:45:47.921397   32084 docker.go:535] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.22.0-rc.0
	k8s.gcr.io/kube-proxy:v1.22.0-rc.0
	k8s.gcr.io/kube-scheduler:v1.22.0-rc.0
	k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0
	k8s.gcr.io/etcd:3.5.0-0
	k8s.gcr.io/coredns/coredns:v1.8.4
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.5
	k8s.gcr.io/etcd:3.4.13-3
	k8s.gcr.io/pause:3.4.1
	kubernetesui/dashboard:v2.1.0
	k8s.gcr.io/coredns/coredns:v1.8.0
	kubernetesui/metrics-scraper:v1.0.4
	
	-- /stdout --
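	
	Extraction of the preload tarball is skipped because the list above already contains every image the v1.22.0-rc.0 control plane needs; the decision is plain set containment between the required image list and the `docker images --format {{.Repository}}:{{.Tag}}` output. A sketch of that comparison (helper name assumed):
	
	package provision
	
	import "strings"
	
	// imagesPreloaded reports whether every required image already appears in
	// the `docker images --format {{.Repository}}:{{.Tag}}` output.
	func imagesPreloaded(dockerImagesOut string, required []string) bool {
		have := map[string]bool{}
		for _, img := range strings.Split(strings.TrimSpace(dockerImagesOut), "\n") {
			have[strings.TrimSpace(img)] = true
		}
		for _, img := range required {
			if !have[img] {
				return false
			}
		}
		return true
	}
	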
	I0817 00:45:47.921963   32084 docker.go:466] Images already preloaded, skipping extraction
	I0817 00:45:47.934186   32084 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0817 00:45:48.140483   32084 docker.go:535] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.22.0-rc.0
	k8s.gcr.io/kube-scheduler:v1.22.0-rc.0
	k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0
	k8s.gcr.io/kube-proxy:v1.22.0-rc.0
	k8s.gcr.io/etcd:3.5.0-0
	k8s.gcr.io/coredns/coredns:v1.8.4
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.5
	k8s.gcr.io/etcd:3.4.13-3
	k8s.gcr.io/pause:3.4.1
	kubernetesui/dashboard:v2.1.0
	k8s.gcr.io/coredns/coredns:v1.8.0
	kubernetesui/metrics-scraper:v1.0.4
	
	-- /stdout --
	I0817 00:45:48.140796   32084 cache_images.go:74] Images are preloaded, skipping loading
	I0817 00:45:48.151546   32084 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
	I0817 00:45:46.341479   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:48.849648   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:47.956026   73600 pod_ready.go:102] pod "cilium-zt4nw" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:50.101426   73600 pod_ready.go:102] pod "cilium-zt4nw" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:48.020256   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:50.536856   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:48.694678   32084 cni.go:93] Creating CNI manager for ""
	I0817 00:45:48.694812   32084 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0817 00:45:48.694812   32084 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0817 00:45:48.695037   32084 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.22.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20210817003608-111344 NodeName:newest-cni-20210817003608-111344 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0817 00:45:48.696015   32084 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "newest-cni-20210817003608-111344"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.22.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0817 00:45:48.696862   32084 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.22.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20210817003608-111344 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210817003608-111344 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0817 00:45:48.713164   32084 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.22.0-rc.0
	I0817 00:45:48.751564   32084 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 00:45:48.764823   32084 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 00:45:48.792735   32084 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (421 bytes)
	I0817 00:45:48.888275   32084 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0817 00:45:48.972202   32084 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I0817 00:45:49.037888   32084 ssh_runner.go:149] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0817 00:45:49.055533   32084 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 00:45:49.106622   32084 certs.go:52] Setting up C:\Users\jenkins\minikube-integration\.minikube\profiles\newest-cni-20210817003608-111344 for IP: 192.168.76.2
	I0817 00:45:49.107934   32084 certs.go:179] skipping minikubeCA CA generation: C:\Users\jenkins\minikube-integration\.minikube\ca.key
	I0817 00:45:49.108969   32084 certs.go:179] skipping proxyClientCA CA generation: C:\Users\jenkins\minikube-integration\.minikube\proxy-client-ca.key
	I0817 00:45:49.111072   32084 certs.go:293] skipping minikube-user signed cert generation: C:\Users\jenkins\minikube-integration\.minikube\profiles\newest-cni-20210817003608-111344\client.key
	I0817 00:45:49.112162   32084 certs.go:293] skipping minikube signed cert generation: C:\Users\jenkins\minikube-integration\.minikube\profiles\newest-cni-20210817003608-111344\apiserver.key.31bdca25
	I0817 00:45:49.112777   32084 certs.go:293] skipping aggregator signed cert generation: C:\Users\jenkins\minikube-integration\.minikube\profiles\newest-cni-20210817003608-111344\proxy-client.key
	I0817 00:45:49.114288   32084 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\111344.pem (1338 bytes)
	W0817 00:45:49.115023   32084 certs.go:372] ignoring C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\111344_empty.pem, impossibly tiny 0 bytes
	I0817 00:45:49.115023   32084 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0817 00:45:49.115023   32084 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0817 00:45:49.115023   32084 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0817 00:45:49.115763   32084 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0817 00:45:49.115763   32084 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\1113442.pem (1708 bytes)
	I0817 00:45:49.118447   32084 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\newest-cni-20210817003608-111344\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 00:45:49.208160   32084 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\newest-cni-20210817003608-111344\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0817 00:45:49.289995   32084 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\newest-cni-20210817003608-111344\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 00:45:49.366389   32084 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\newest-cni-20210817003608-111344\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0817 00:45:49.504394   32084 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 00:45:49.604638   32084 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0817 00:45:49.700921   32084 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 00:45:49.869010   32084 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0817 00:45:49.975665   32084 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\1113442.pem --> /usr/share/ca-certificates/1113442.pem (1708 bytes)
	I0817 00:45:50.072617   32084 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 00:45:50.171974   32084 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\certs\111344.pem --> /usr/share/ca-certificates/111344.pem (1338 bytes)
	I0817 00:45:50.274351   32084 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0817 00:45:50.363361   32084 ssh_runner.go:149] Run: openssl version
	I0817 00:45:50.412784   32084 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1113442.pem && ln -fs /usr/share/ca-certificates/1113442.pem /etc/ssl/certs/1113442.pem"
	I0817 00:45:50.473129   32084 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1113442.pem
	I0817 00:45:50.497015   32084 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 16 23:23 /usr/share/ca-certificates/1113442.pem
	I0817 00:45:50.505673   32084 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1113442.pem
	I0817 00:45:50.536086   32084 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1113442.pem /etc/ssl/certs/3ec20f2e.0"
	I0817 00:45:50.573222   32084 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 00:45:50.635030   32084 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 00:45:50.654493   32084 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 16 23:12 /usr/share/ca-certificates/minikubeCA.pem
	I0817 00:45:50.667577   32084 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 00:45:50.707175   32084 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0817 00:45:50.754825   32084 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111344.pem && ln -fs /usr/share/ca-certificates/111344.pem /etc/ssl/certs/111344.pem"
	I0817 00:45:50.815779   32084 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/111344.pem
	I0817 00:45:50.841817   32084 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 16 23:23 /usr/share/ca-certificates/111344.pem
	I0817 00:45:50.853438   32084 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111344.pem
	I0817 00:45:50.897266   32084 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/111344.pem /etc/ssl/certs/51391683.0"
	I0817 00:45:50.926628   32084 kubeadm.go:390] StartCluster: {Name:newest-cni-20210817003608-111344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210817003608-111344 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 00:45:50.934714   32084 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0817 00:45:51.099709   32084 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 00:45:51.129885   32084 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0817 00:45:51.130346   32084 kubeadm.go:600] restartCluster start
	I0817 00:45:51.143768   32084 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0817 00:45:51.179983   32084 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0817 00:45:51.191763   32084 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20210817003608-111344
	I0817 00:45:51.748434   32084 kubeconfig.go:117] verify returned: extract IP: "newest-cni-20210817003608-111344" does not appear in C:\Users\jenkins\minikube-integration\kubeconfig
	I0817 00:45:51.749549   32084 kubeconfig.go:128] "newest-cni-20210817003608-111344" context is missing from C:\Users\jenkins\minikube-integration\kubeconfig - will repair!
	I0817 00:45:51.751035   32084 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\kubeconfig: {Name:mk312e0248780fd448f3a83862df8ee597f47373 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 00:45:51.787206   32084 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0817 00:45:51.829105   32084 api_server.go:164] Checking apiserver status ...
	I0817 00:45:51.836963   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 00:45:51.887764   32084 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 00:45:52.089010   32084 api_server.go:164] Checking apiserver status ...
	I0817 00:45:52.099000   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 00:45:52.171337   32084 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 00:45:52.288281   32084 api_server.go:164] Checking apiserver status ...
	I0817 00:45:52.296326   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 00:45:52.349113   32084 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 00:45:52.488435   32084 api_server.go:164] Checking apiserver status ...
	I0817 00:45:52.497714   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 00:45:52.551953   32084 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 00:45:52.687965   32084 api_server.go:164] Checking apiserver status ...
	I0817 00:45:52.704517   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 00:45:52.765053   32084 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 00:45:52.888068   32084 api_server.go:164] Checking apiserver status ...
	I0817 00:45:52.895815   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 00:45:52.946201   32084 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 00:45:53.088727   32084 api_server.go:164] Checking apiserver status ...
	I0817 00:45:53.099855   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 00:45:53.152272   32084 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 00:45:53.288751   32084 api_server.go:164] Checking apiserver status ...
	I0817 00:45:53.296904   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 00:45:53.363566   32084 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 00:45:50.917071   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:53.374128   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:52.522371   73600 pod_ready.go:102] pod "cilium-zt4nw" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:54.978723   73600 pod_ready.go:102] pod "cilium-zt4nw" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:53.012282   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:55.492296   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:53.491963   32084 api_server.go:164] Checking apiserver status ...
	I0817 00:45:53.498651   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 00:45:53.563947   32084 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 00:45:53.689686   32084 api_server.go:164] Checking apiserver status ...
	I0817 00:45:53.696729   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 00:45:53.759784   32084 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 00:45:53.887952   32084 api_server.go:164] Checking apiserver status ...
	I0817 00:45:53.896348   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 00:45:53.939449   32084 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 00:45:54.088197   32084 api_server.go:164] Checking apiserver status ...
	I0817 00:45:54.104444   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 00:45:54.173323   32084 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 00:45:54.288922   32084 api_server.go:164] Checking apiserver status ...
	I0817 00:45:54.306176   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 00:45:54.381983   32084 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 00:45:54.487959   32084 api_server.go:164] Checking apiserver status ...
	I0817 00:45:54.496118   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 00:45:54.554163   32084 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 00:45:54.688716   32084 api_server.go:164] Checking apiserver status ...
	I0817 00:45:54.698795   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 00:45:54.784270   32084 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 00:45:54.888205   32084 api_server.go:164] Checking apiserver status ...
	I0817 00:45:54.896555   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 00:45:54.954971   32084 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 00:45:54.955114   32084 api_server.go:164] Checking apiserver status ...
	I0817 00:45:54.963789   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 00:45:55.024404   32084 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 00:45:55.024536   32084 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
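	
	The burst of "Checking apiserver status ..." lines above is a fixed-interval poll: the roughly 200ms spacing of the timestamps shows pgrep being retried until a kube-apiserver process appears or the wait times out, at which point restartCluster falls back to the full reconfigure seen next. A minimal version of that loop:
	
	package provision
	
	import (
		"errors"
		"os/exec"
		"time"
	)
	
	// waitForAPIServerProc polls pgrep roughly every 200ms until a
	// kube-apiserver process exists or timeout elapses; a timeout is the
	// "needs reconfigure: apiserver error" signal seen above.
	func waitForAPIServerProc(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
			if err == nil {
				return nil // process found
			}
			time.Sleep(200 * time.Millisecond)
		}
		return errors.New("timed out waiting for the condition")
	}
	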
	I0817 00:45:55.024536   32084 kubeadm.go:1032] stopping kube-system containers ...
	I0817 00:45:55.031833   32084 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0817 00:45:55.230911   32084 docker.go:367] Stopping containers: [3f898bd77d20 f2fc204cb173 20f37dcaff3d 6c2da5a2baca 1ad258249d60 dba5f9da8cf1 9a8baa9115fe aac407980692 25fbe133425d 28755f53d020 1737f02b01d3 41d3f9624cd5 11d218d1e749 00e51ba67e2f 75b1c13f00ae]
	I0817 00:45:55.235924   32084 ssh_runner.go:149] Run: docker stop 3f898bd77d20 f2fc204cb173 20f37dcaff3d 6c2da5a2baca 1ad258249d60 dba5f9da8cf1 9a8baa9115fe aac407980692 25fbe133425d 28755f53d020 1737f02b01d3 41d3f9624cd5 11d218d1e749 00e51ba67e2f 75b1c13f00ae
	I0817 00:45:55.438973   32084 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0817 00:45:55.536943   32084 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 00:45:55.566535   32084 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5639 Aug 17 00:43 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Aug 17 00:43 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2067 Aug 17 00:44 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Aug 17 00:43 /etc/kubernetes/scheduler.conf
	
	I0817 00:45:55.566535   32084 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0817 00:45:55.629779   32084 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0817 00:45:55.682971   32084 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0817 00:45:55.716675   32084 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0817 00:45:55.728977   32084 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0817 00:45:55.761792   32084 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0817 00:45:55.806664   32084 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0817 00:45:55.821245   32084 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0817 00:45:55.875139   32084 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 00:45:55.915980   32084 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0817 00:45:55.915980   32084 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 00:45:56.347060   32084 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 00:45:55.837502   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:58.342115   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:56.998560   73600 pod_ready.go:102] pod "cilium-zt4nw" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:59.072434   73600 pod_ready.go:102] pod "cilium-zt4nw" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:57.540287   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:00.038186   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:00.743298   32084 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (4.3960708s)
	I0817 00:46:00.743298   32084 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0817 00:46:01.482415   32084 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 00:46:01.934354   32084 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0817 00:46:02.439388   32084 api_server.go:50] waiting for apiserver process to appear ...
	I0817 00:46:02.448554   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:46:03.049143   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:46:00.835758   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:03.345507   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:01.497628   73600 pod_ready.go:102] pod "cilium-zt4nw" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:03.978476   73600 pod_ready.go:102] pod "cilium-zt4nw" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:06.008050   73600 pod_ready.go:102] pod "cilium-zt4nw" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:02.518845   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:04.519344   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:03.548680   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:46:04.055022   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:46:04.550028   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:46:05.041585   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:46:05.556288   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:46:06.050716   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:46:06.550166   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:46:07.052075   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:46:07.551559   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:46:08.052353   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:46:05.880341   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:07.904858   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:08.501085   73600 pod_ready.go:102] pod "cilium-zt4nw" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:07.040502   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:09.527921   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:08.553281   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:46:09.051310   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:46:09.553828   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:46:10.051378   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:46:10.549209   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:46:11.050319   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:46:11.549585   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:46:12.049932   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:46:12.549007   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:46:10.350111   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:12.362842   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:11.161033   73600 pod_ready.go:102] pod "cilium-zt4nw" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:12.022480   73600 pod_ready.go:92] pod "cilium-zt4nw" in "kube-system" namespace has status "Ready":"True"
	I0817 00:46:12.022641   73600 pod_ready.go:81] duration metric: took 54.157539s waiting for pod "cilium-zt4nw" in "kube-system" namespace to be "Ready" ...
	I0817 00:46:12.022641   73600 pod_ready.go:78] waiting up to 5m0s for pod "coredns-558bd4d5db-5kk5g" in "kube-system" namespace to be "Ready" ...
	I0817 00:46:14.182822   73600 pod_ready.go:102] pod "coredns-558bd4d5db-5kk5g" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:15.706393   73600 pod_ready.go:92] pod "coredns-558bd4d5db-5kk5g" in "kube-system" namespace has status "Ready":"True"
	I0817 00:46:15.706657   73600 pod_ready.go:81] duration metric: took 3.6838761s waiting for pod "coredns-558bd4d5db-5kk5g" in "kube-system" namespace to be "Ready" ...
	I0817 00:46:15.706657   73600 pod_ready.go:78] waiting up to 5m0s for pod "coredns-558bd4d5db-cvwp2" in "kube-system" namespace to be "Ready" ...
	I0817 00:46:15.725079   73600 pod_ready.go:97] error getting pod "coredns-558bd4d5db-cvwp2" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-cvwp2" not found
	I0817 00:46:15.725079   73600 pod_ready.go:81] duration metric: took 18.4211ms waiting for pod "coredns-558bd4d5db-cvwp2" in "kube-system" namespace to be "Ready" ...
	E0817 00:46:15.725079   73600 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-558bd4d5db-cvwp2" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-cvwp2" not found
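	
	coredns-558bd4d5db-cvwp2 disappearing mid-wait is treated as skippable rather than fatal: a replica that was deleted (here, CoreDNS scaling down after the restart) can never become Ready, so the waiter logs the not-found error and moves on to the next pod. With client-go, that distinction looks roughly like the following sketch (clientset and namespace wiring assumed):
	
	package provision
	
	import (
		"context"
	
		corev1 "k8s.io/api/core/v1"
		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)
	
	// podReadyOrGone returns (ready, done): a deleted pod counts as done so
	// the caller can skip it instead of waiting out the full timeout.
	func podReadyOrGone(ctx context.Context, c kubernetes.Interface, ns, name string) (bool, bool) {
		pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return false, true // pod was deleted: skip, don't fail
		}
		if err != nil {
			return false, false // transient error: keep polling
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				ready := cond.Status == corev1.ConditionTrue
				return ready, ready
			}
		}
		return false, false
	}
	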
	I0817 00:46:15.725079   73600 pod_ready.go:78] waiting up to 5m0s for pod "etcd-cilium-20210817002204-111344" in "kube-system" namespace to be "Ready" ...
	I0817 00:46:15.778381   73600 pod_ready.go:92] pod "etcd-cilium-20210817002204-111344" in "kube-system" namespace has status "Ready":"True"
	I0817 00:46:15.778381   73600 pod_ready.go:81] duration metric: took 53.3005ms waiting for pod "etcd-cilium-20210817002204-111344" in "kube-system" namespace to be "Ready" ...
	I0817 00:46:15.778381   73600 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-cilium-20210817002204-111344" in "kube-system" namespace to be "Ready" ...
	I0817 00:46:15.808209   73600 pod_ready.go:92] pod "kube-apiserver-cilium-20210817002204-111344" in "kube-system" namespace has status "Ready":"True"
	I0817 00:46:15.808209   73600 pod_ready.go:81] duration metric: took 29.8269ms waiting for pod "kube-apiserver-cilium-20210817002204-111344" in "kube-system" namespace to be "Ready" ...
	I0817 00:46:15.808209   73600 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-cilium-20210817002204-111344" in "kube-system" namespace to be "Ready" ...
	I0817 00:46:15.836016   73600 pod_ready.go:92] pod "kube-controller-manager-cilium-20210817002204-111344" in "kube-system" namespace has status "Ready":"True"
	I0817 00:46:15.836016   73600 pod_ready.go:81] duration metric: took 27.8061ms waiting for pod "kube-controller-manager-cilium-20210817002204-111344" in "kube-system" namespace to be "Ready" ...
	I0817 00:46:15.836016   73600 pod_ready.go:78] waiting up to 5m0s for pod "kube-proxy-mjrwl" in "kube-system" namespace to be "Ready" ...
	I0817 00:46:15.868094   73600 pod_ready.go:92] pod "kube-proxy-mjrwl" in "kube-system" namespace has status "Ready":"True"
	I0817 00:46:15.868345   73600 pod_ready.go:81] duration metric: took 32.0758ms waiting for pod "kube-proxy-mjrwl" in "kube-system" namespace to be "Ready" ...
	I0817 00:46:15.868345   73600 pod_ready.go:78] waiting up to 5m0s for pod "kube-scheduler-cilium-20210817002204-111344" in "kube-system" namespace to be "Ready" ...
	I0817 00:46:11.998831   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:14.524593   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:13.611184   32084 ssh_runner.go:189] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.0621368s)
	I0817 00:46:14.053183   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:46:14.702043   32084 api_server.go:70] duration metric: took 12.262189s to wait for apiserver process to appear ...
	I0817 00:46:14.702043   32084 api_server.go:86] waiting for apiserver healthz status ...
	I0817 00:46:14.702277   32084 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55235/healthz ...
	I0817 00:46:14.836191   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:16.850275   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:16.245972   73600 pod_ready.go:92] pod "kube-scheduler-cilium-20210817002204-111344" in "kube-system" namespace has status "Ready":"True"
	I0817 00:46:16.245972   73600 pod_ready.go:81] duration metric: took 377.6128ms waiting for pod "kube-scheduler-cilium-20210817002204-111344" in "kube-system" namespace to be "Ready" ...
	I0817 00:46:16.245972   73600 pod_ready.go:38] duration metric: took 3m43.0608939s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
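The summary line above closes a 3m43s wait that cycled through every label listed (kube-dns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler), polling each matching pod until its Ready condition turned True. A compact sketch of one such check, assuming client-go:

    package sketch

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // allReady reports whether every kube-system pod matching selector
    // (e.g. "component=etcd" or "k8s-app=kube-dns") has Ready=True.
    func allReady(ctx context.Context, c kubernetes.Interface, selector string) (bool, error) {
    	pods, err := c.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: selector})
    	if err != nil {
    		return false, err
    	}
    	for _, p := range pods.Items {
    		ready := false
    		for _, cond := range p.Status.Conditions {
    			if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
    				ready = true
    			}
    		}
    		if !ready {
    			return false, nil
    		}
    	}
    	return true, nil
    }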
	I0817 00:46:16.245972   73600 api_server.go:50] waiting for apiserver process to appear ...
	I0817 00:46:16.254642   73600 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0817 00:46:17.179098   73600 logs.go:270] 1 containers: [3e5f0181aa79]
	I0817 00:46:17.186930   73600 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0817 00:46:17.970933   73600 logs.go:270] 1 containers: [f6eb6c2452d6]
	I0817 00:46:17.984433   73600 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0817 00:46:18.730136   73600 logs.go:270] 1 containers: [7f3c95d6335f]
	I0817 00:46:18.743186   73600 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0817 00:46:19.113184   73600 logs.go:270] 1 containers: [b87de0ae0f76]
	I0817 00:46:19.116872   73600 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0817 00:46:19.433891   73600 logs.go:270] 1 containers: [fa25c8fed512]
	I0817 00:46:19.440720   73600 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0817 00:46:19.781074   73600 logs.go:270] 0 containers: []
	W0817 00:46:19.781074   73600 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0817 00:46:19.788121   73600 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0817 00:46:20.055550   73600 logs.go:270] 2 containers: [00638b764dd3 4306a97290a5]
	I0817 00:46:20.062273   73600 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0817 00:46:20.267469   73600 logs.go:270] 1 containers: [60b439d9ae55]
	I0817 00:46:20.267469   73600 logs.go:123] Gathering logs for kube-scheduler [b87de0ae0f76] ...
	I0817 00:46:20.267469   73600 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 b87de0ae0f76"
	I0817 00:46:20.607372   73600 logs.go:123] Gathering logs for kube-proxy [fa25c8fed512] ...
	I0817 00:46:20.607372   73600 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 fa25c8fed512"
	I0817 00:46:20.814801   73600 logs.go:123] Gathering logs for Docker ...
	I0817 00:46:20.815022   73600 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0817 00:46:20.949194   73600 logs.go:123] Gathering logs for container status ...
	I0817 00:46:20.949280   73600 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
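The Run lines above fan out one `docker ps -a --filter name=k8s_<component> --format {{.ID}}` per control-plane component, then tail 400 lines from each hit; a component with no match (kubernetes-dashboard here) is warned about and skipped. A local sketch of that pattern, shelling out directly rather than through the SSH runner (the filter prefix, tail depth, and component names are those visible in the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // tailComponentLogs resolves the k8s_<component> container ID, then
    // tails its logs; docker logs writes to both stdout and stderr, hence
    // CombinedOutput.
    func tailComponentLogs(component string) (string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
    	if err != nil {
    		return "", err
    	}
    	ids := strings.Fields(string(out))
    	if len(ids) == 0 {
    		return "", fmt.Errorf("no container found matching %q", component)
    	}
    	logs, err := exec.Command("docker", "logs", "--tail", "400", ids[0]).CombinedOutput()
    	return string(logs), err
    }

    func main() {
    	logs, err := tailComponentLogs("kube-apiserver")
    	fmt.Println(logs, err)
    }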
	I0817 00:46:17.008876   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:19.479603   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:19.704506   32084 api_server.go:255] stopped: https://127.0.0.1:55235/healthz: Get "https://127.0.0.1:55235/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
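The "stopped:" line above is a per-request timeout, not a server error: the probe gives the apiserver a few seconds to answer, records "Client.Timeout exceeded while awaiting headers" when it does not, and schedules the next check. A sketch of such a probe with net/http; the 5s timeout and the port are inferred from the log and are assumptions:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // checkHealthz performs one bounded GET against the apiserver healthz
    // endpoint. A hung server surfaces as a client-side timeout error
    // (the "Client.Timeout exceeded while awaiting headers" case above).
    func checkHealthz(url string) (int, error) {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// The apiserver's serving cert is self-signed during bootstrap.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get(url)
    	if err != nil {
    		return 0, err
    	}
    	defer resp.Body.Close()
    	return resp.StatusCode, nil
    }

    func main() {
    	code, err := checkHealthz("https://127.0.0.1:55235/healthz")
    	fmt.Println(code, err)
    }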
	I0817 00:46:20.205864   32084 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55235/healthz ...
	I0817 00:46:19.331549   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:21.347652   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:23.357877   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:21.347652   73600 logs.go:123] Gathering logs for kubelet ...
	I0817 00:46:21.347652   73600 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0817 00:46:21.794338   73600 logs.go:123] Gathering logs for dmesg ...
	I0817 00:46:21.794338   73600 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0817 00:46:21.879924   73600 logs.go:123] Gathering logs for describe nodes ...
	I0817 00:46:21.879924   73600 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0817 00:46:23.152324   73600 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (1.2723515s)
	I0817 00:46:23.159724   73600 logs.go:123] Gathering logs for etcd [f6eb6c2452d6] ...
	I0817 00:46:23.160039   73600 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 f6eb6c2452d6"
	I0817 00:46:23.661559   73600 logs.go:123] Gathering logs for kube-controller-manager [60b439d9ae55] ...
	I0817 00:46:23.661789   73600 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 60b439d9ae55"
	I0817 00:46:23.932868   73600 logs.go:123] Gathering logs for kube-apiserver [3e5f0181aa79] ...
	I0817 00:46:23.932868   73600 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 3e5f0181aa79"
	I0817 00:46:24.272047   73600 logs.go:123] Gathering logs for coredns [7f3c95d6335f] ...
	I0817 00:46:24.272047   73600 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 7f3c95d6335f"
	I0817 00:46:24.737143   73600 logs.go:123] Gathering logs for storage-provisioner [00638b764dd3] ...
	I0817 00:46:24.737376   73600 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 00638b764dd3"
	I0817 00:46:25.048961   73600 logs.go:123] Gathering logs for storage-provisioner [4306a97290a5] ...
	I0817 00:46:25.048961   73600 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 4306a97290a5"
	I0817 00:46:21.526574   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:24.004490   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:25.207878   32084 api_server.go:255] stopped: https://127.0.0.1:55235/healthz: Get "https://127.0.0.1:55235/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0817 00:46:25.705788   32084 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55235/healthz ...
	I0817 00:46:25.841847   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:27.862155   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:27.888584   73600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:46:28.065581   73600 api_server.go:70] duration metric: took 3m57.8992528s to wait for apiserver process to appear ...
	I0817 00:46:28.065581   73600 api_server.go:86] waiting for apiserver healthz status ...
	I0817 00:46:28.075727   73600 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0817 00:46:28.391871   73600 logs.go:270] 1 containers: [3e5f0181aa79]
	I0817 00:46:28.398456   73600 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0817 00:46:28.613745   73600 logs.go:270] 1 containers: [f6eb6c2452d6]
	I0817 00:46:28.620248   73600 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0817 00:46:28.989655   73600 logs.go:270] 1 containers: [7f3c95d6335f]
	I0817 00:46:28.998091   73600 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0817 00:46:29.204892   73600 logs.go:270] 1 containers: [b87de0ae0f76]
	I0817 00:46:29.212715   73600 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0817 00:46:29.511393   73600 logs.go:270] 1 containers: [fa25c8fed512]
	I0817 00:46:29.521072   73600 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0817 00:46:29.984646   73600 logs.go:270] 0 containers: []
	W0817 00:46:29.984748   73600 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0817 00:46:29.989855   73600 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0817 00:46:30.477193   73600 logs.go:270] 2 containers: [00638b764dd3 4306a97290a5]
	I0817 00:46:30.485331   73600 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0817 00:46:30.722292   73600 logs.go:270] 1 containers: [60b439d9ae55]
	I0817 00:46:30.723041   73600 logs.go:123] Gathering logs for dmesg ...
	I0817 00:46:30.723041   73600 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0817 00:46:30.862539   73600 logs.go:123] Gathering logs for etcd [f6eb6c2452d6] ...
	I0817 00:46:30.862539   73600 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 f6eb6c2452d6"
	I0817 00:46:26.648679   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:29.012238   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:30.706602   32084 api_server.go:255] stopped: https://127.0.0.1:55235/healthz: Get "https://127.0.0.1:55235/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0817 00:46:31.207000   32084 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55235/healthz ...
	I0817 00:46:32.625953   32084 api_server.go:265] https://127.0.0.1:55235/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0817 00:46:32.706862   32084 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55235/healthz ...
	I0817 00:46:32.775262   32084 api_server.go:265] https://127.0.0.1:55235/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I0817 00:46:33.207542   32084 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55235/healthz ...
	I0817 00:46:30.369003   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:32.387168   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:31.832925   73600 logs.go:123] Gathering logs for coredns [7f3c95d6335f] ...
	I0817 00:46:31.832925   73600 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 7f3c95d6335f"
	I0817 00:46:32.415612   73600 logs.go:123] Gathering logs for kube-scheduler [b87de0ae0f76] ...
	I0817 00:46:32.415612   73600 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 b87de0ae0f76"
	I0817 00:46:32.795088   73600 logs.go:123] Gathering logs for storage-provisioner [00638b764dd3] ...
	I0817 00:46:32.795088   73600 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 00638b764dd3"
	I0817 00:46:33.577947   73600 logs.go:123] Gathering logs for kube-controller-manager [60b439d9ae55] ...
	I0817 00:46:33.577947   73600 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 60b439d9ae55"
	I0817 00:46:34.494677   73600 logs.go:123] Gathering logs for kubelet ...
	I0817 00:46:34.494677   73600 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0817 00:46:34.895045   73600 logs.go:123] Gathering logs for describe nodes ...
	I0817 00:46:34.895045   73600 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0817 00:46:31.490316   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:33.504774   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:35.523525   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:33.458527   32084 api_server.go:265] https://127.0.0.1:55235/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
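In a verbose healthz body like the one above, [+] marks passing checks and [-] failing ones; "reason withheld" means the apiserver deliberately keeps failure details out of the HTTP response (they go to the apiserver's own log instead). The 500s in this stretch simply trace the poststarthooks completing one by one: bootstrap-controller, then the scheduling priority classes, then the priority-and-fairness config, with rbac/bootstrap-roles and apiservice-registration-controller last. A small sketch that extracts the failing check names from such a body:

    package main

    import (
    	"bufio"
    	"fmt"
    	"strings"
    )

    // failingChecks returns the names of healthz checks whose line starts
    // with "[-]", e.g. "poststarthook/rbac/bootstrap-roles" from
    // "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld".
    func failingChecks(body string) []string {
    	var failed []string
    	sc := bufio.NewScanner(strings.NewReader(body))
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		if strings.HasPrefix(line, "[-]") {
    			failed = append(failed, strings.Fields(strings.TrimPrefix(line, "[-]"))[0])
    		}
    	}
    	return failed
    }

    func main() {
    	fmt.Println(failingChecks("[+]ping ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n"))
    }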
	I0817 00:46:33.707362   32084 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55235/healthz ...
	I0817 00:46:33.766395   32084 api_server.go:265] https://127.0.0.1:55235/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 00:46:34.207244   32084 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55235/healthz ...
	I0817 00:46:34.323362   32084 api_server.go:265] https://127.0.0.1:55235/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 00:46:34.706703   32084 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55235/healthz ...
	I0817 00:46:34.874066   32084 api_server.go:265] https://127.0.0.1:55235/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 00:46:35.206367   32084 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55235/healthz ...
	I0817 00:46:35.598099   32084 api_server.go:265] https://127.0.0.1:55235/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 00:46:35.705988   32084 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55235/healthz ...
	I0817 00:46:35.801628   32084 api_server.go:265] https://127.0.0.1:55235/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 00:46:36.207589   32084 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55235/healthz ...
	I0817 00:46:36.272876   32084 api_server.go:265] https://127.0.0.1:55235/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 00:46:36.706430   32084 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55235/healthz ...
	I0817 00:46:36.773297   32084 api_server.go:265] https://127.0.0.1:55235/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 00:46:37.205763   32084 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55235/healthz ...
	I0817 00:46:37.426173   32084 api_server.go:265] https://127.0.0.1:55235/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 00:46:37.706882   32084 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55235/healthz ...
	I0817 00:46:37.765031   32084 api_server.go:265] https://127.0.0.1:55235/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 00:46:38.205941   32084 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55235/healthz ...
	I0817 00:46:34.405921   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:36.835838   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:38.884593   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:36.745082   73600 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (1.8499671s)
	I0817 00:46:36.749479   73600 logs.go:123] Gathering logs for kube-apiserver [3e5f0181aa79] ...
	I0817 00:46:36.749479   73600 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 3e5f0181aa79"
	I0817 00:46:37.499547   73600 logs.go:123] Gathering logs for kube-proxy [fa25c8fed512] ...
	I0817 00:46:37.499547   73600 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 fa25c8fed512"
	I0817 00:46:37.908879   73600 logs.go:123] Gathering logs for storage-provisioner [4306a97290a5] ...
	I0817 00:46:37.908879   73600 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 4306a97290a5"
	I0817 00:46:38.209224   73600 logs.go:123] Gathering logs for Docker ...
	I0817 00:46:38.209224   73600 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0817 00:46:38.329477   73600 logs.go:123] Gathering logs for container status ...
	I0817 00:46:38.329477   73600 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0817 00:46:37.990935   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:40.486594   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:38.389726   32084 api_server.go:265] https://127.0.0.1:55235/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 00:46:38.712244   32084 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55235/healthz ...
	I0817 00:46:38.791985   32084 api_server.go:265] https://127.0.0.1:55235/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 00:46:39.206377   32084 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55235/healthz ...
	I0817 00:46:39.306382   32084 api_server.go:265] https://127.0.0.1:55235/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 00:46:39.706101   32084 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55235/healthz ...
	I0817 00:46:39.765359   32084 api_server.go:265] https://127.0.0.1:55235/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 00:46:40.212070   32084 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55235/healthz ...
	I0817 00:46:40.248228   32084 api_server.go:265] https://127.0.0.1:55235/healthz returned 200:
	ok
	I0817 00:46:40.308529   32084 api_server.go:139] control plane version: v1.22.0-rc.0
	I0817 00:46:40.308666   32084 api_server.go:129] duration metric: took 25.605513s to wait for apiserver health ...
	I0817 00:46:40.308666   32084 cni.go:93] Creating CNI manager for ""
	I0817 00:46:40.308666   32084 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0817 00:46:40.308864   32084 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 00:46:40.375820   32084 system_pods.go:59] 8 kube-system pods found
	I0817 00:46:40.375820   32084 system_pods.go:61] "coredns-78fcd69978-4rqlg" [e31d4e8c-dd23-45cf-9a37-aba902e87d97] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0817 00:46:40.375820   32084 system_pods.go:61] "etcd-newest-cni-20210817003608-111344" [43d91330-4f5d-46ac-aef5-352c59424787] Running
	I0817 00:46:40.375820   32084 system_pods.go:61] "kube-apiserver-newest-cni-20210817003608-111344" [b6d309fd-9aa2-45b7-aab0-caa42b6e983c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0817 00:46:40.375820   32084 system_pods.go:61] "kube-controller-manager-newest-cni-20210817003608-111344" [8d7f557b-d69d-4017-a638-ec780cd4ccf3] Running
	I0817 00:46:40.375820   32084 system_pods.go:61] "kube-proxy-9nj8l" [de7a7f83-5225-4d60-9fba-e7b0c120247f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0817 00:46:40.375820   32084 system_pods.go:61] "kube-scheduler-newest-cni-20210817003608-111344" [199c0871-b83b-4083-8f3a-05523bb205dd] Running
	I0817 00:46:40.375820   32084 system_pods.go:61] "metrics-server-7c784ccb57-vkvfp" [9ec6eb01-a852-4f2e-a8bb-0d9888bcf668] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 00:46:40.375820   32084 system_pods.go:61] "storage-provisioner" [af23beac-6b23-4a97-9b39-7db56aa9f154] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0817 00:46:40.375820   32084 system_pods.go:74] duration metric: took 66.954ms to wait for pod list to return data ...
	I0817 00:46:40.376494   32084 node_conditions.go:102] verifying NodePressure condition ...
	I0817 00:46:40.410975   32084 node_conditions.go:122] node storage ephemeral capacity is 65792556Ki
	I0817 00:46:40.411230   32084 node_conditions.go:123] node cpu capacity is 4
	I0817 00:46:40.411230   32084 node_conditions.go:105] duration metric: took 34.7342ms to run NodePressure ...
	I0817 00:46:40.411230   32084 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 00:46:41.388882   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:43.509723   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:45.559702   32084 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (5.1481321s)
	I0817 00:46:45.560825   32084 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0817 00:46:45.686116   32084 ops.go:34] apiserver oom_adj: -16
	I0817 00:46:45.686210   32084 kubeadm.go:604] restartCluster took 54.5535543s
	I0817 00:46:45.686210   32084 kubeadm.go:392] StartCluster complete in 54.7575014s
	I0817 00:46:45.686378   32084 settings.go:142] acquiring lock: {Name:mk81656fcf8bcddd49caaa1adb1c177165a02100 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 00:46:45.686701   32084 settings.go:150] Updating kubeconfig:  C:\Users\jenkins\minikube-integration\kubeconfig
	I0817 00:46:45.699752   32084 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\kubeconfig: {Name:mk312e0248780fd448f3a83862df8ee597f47373 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 00:46:45.803723   32084 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20210817003608-111344" rescaled to 1
	I0817 00:46:45.804161   32084 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0817 00:46:45.804324   32084 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0817 00:46:45.804161   32084 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}
	I0817 00:46:45.804580   32084 addons.go:59] Setting storage-provisioner=true in profile "newest-cni-20210817003608-111344"
	I0817 00:46:45.804580   32084 addons.go:59] Setting default-storageclass=true in profile "newest-cni-20210817003608-111344"
	I0817 00:46:45.804580   32084 addons.go:59] Setting dashboard=true in profile "newest-cni-20210817003608-111344"
	I0817 00:46:45.804681   32084 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20210817003608-111344"
	I0817 00:46:45.804864   32084 addons.go:135] Setting addon storage-provisioner=true in "newest-cni-20210817003608-111344"
	W0817 00:46:45.804864   32084 addons.go:147] addon storage-provisioner should already be in state true
	I0817 00:46:45.804864   32084 addons.go:135] Setting addon dashboard=true in "newest-cni-20210817003608-111344"
	I0817 00:46:41.291462   73600 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55215/healthz ...
	I0817 00:46:41.363328   73600 api_server.go:265] https://127.0.0.1:55215/healthz returned 200:
	ok
	I0817 00:46:41.372925   73600 api_server.go:139] control plane version: v1.21.3
	I0817 00:46:41.373110   73600 api_server.go:129] duration metric: took 13.3070227s to wait for apiserver health ...
	I0817 00:46:41.373110   73600 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 00:46:41.373333   73600 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0817 00:46:42.070778   73600 logs.go:270] 1 containers: [3e5f0181aa79]
	I0817 00:46:42.079041   73600 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0817 00:46:42.548958   73600 logs.go:270] 1 containers: [f6eb6c2452d6]
	I0817 00:46:42.556190   73600 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0817 00:46:42.958008   73600 logs.go:270] 1 containers: [7f3c95d6335f]
	I0817 00:46:42.965538   73600 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0817 00:46:43.436064   73600 logs.go:270] 1 containers: [b87de0ae0f76]
	I0817 00:46:43.442581   73600 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0817 00:46:43.720813   73600 logs.go:270] 1 containers: [fa25c8fed512]
	I0817 00:46:43.727210   73600 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0817 00:46:44.346327   73600 logs.go:270] 0 containers: []
	W0817 00:46:44.346515   73600 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0817 00:46:44.353865   73600 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0817 00:46:44.778502   73600 logs.go:270] 2 containers: [00638b764dd3 4306a97290a5]
	I0817 00:46:44.784792   73600 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0817 00:46:45.235819   73600 logs.go:270] 1 containers: [60b439d9ae55]
	I0817 00:46:45.235819   73600 logs.go:123] Gathering logs for kube-scheduler [b87de0ae0f76] ...
	I0817 00:46:45.235819   73600 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 b87de0ae0f76"
	I0817 00:46:45.750111   73600 logs.go:123] Gathering logs for kube-proxy [fa25c8fed512] ...
	I0817 00:46:45.750276   73600 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 fa25c8fed512"
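The logs.go lines above follow a fixed gathering pattern: resolve each component's container ID with docker ps -a --filter=name=k8s_<component> --format={{.ID}}, then tail the last 400 lines of each with docker logs. A sketch of the same pattern against a local Docker daemon (minikube actually runs these commands over SSH inside the node container):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists the IDs of containers whose name matches the filter,
    // mirroring the "docker ps -a --filter=name=..." records above.
    func containerIDs(name string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name="+name, "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, comp := range []string{"k8s_kube-apiserver", "k8s_etcd", "k8s_coredns"} {
            ids, err := containerIDs(comp)
            if err != nil || len(ids) == 0 {
                // the log's 'No container was found matching "kubernetes-dashboard"' case
                fmt.Printf("no container found matching %q\n", comp)
                continue
            }
            for _, id := range ids {
                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Printf("==> %s [%s] <==\n%s\n", comp, id, logs)
            }
        }
    }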
	I0817 00:46:42.996270   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:45.010647   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:45.806645   32084 out.go:177] * Verifying Kubernetes components...
	I0817 00:46:45.804681   32084 addons.go:59] Setting metrics-server=true in profile "newest-cni-20210817003608-111344"
	W0817 00:46:45.804864   32084 addons.go:147] addon dashboard should already be in state true
	I0817 00:46:45.805166   32084 host.go:66] Checking if "newest-cni-20210817003608-111344" exists ...
	I0817 00:46:45.805665   32084 config.go:177] Loaded profile config "newest-cni-20210817003608-111344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.0-rc.0
	I0817 00:46:45.806645   32084 addons.go:135] Setting addon metrics-server=true in "newest-cni-20210817003608-111344"
	W0817 00:46:45.806645   32084 addons.go:147] addon metrics-server should already be in state true
	I0817 00:46:45.807318   32084 host.go:66] Checking if "newest-cni-20210817003608-111344" exists ...
	I0817 00:46:45.807318   32084 host.go:66] Checking if "newest-cni-20210817003608-111344" exists ...
	I0817 00:46:45.819240   32084 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 00:46:45.828044   32084 cli_runner.go:115] Run: docker container inspect newest-cni-20210817003608-111344 --format={{.State.Status}}
	I0817 00:46:45.829061   32084 cli_runner.go:115] Run: docker container inspect newest-cni-20210817003608-111344 --format={{.State.Status}}
	I0817 00:46:45.831847   32084 cli_runner.go:115] Run: docker container inspect newest-cni-20210817003608-111344 --format={{.State.Status}}
	I0817 00:46:45.834894   32084 cli_runner.go:115] Run: docker container inspect newest-cni-20210817003608-111344 --format={{.State.Status}}
	I0817 00:46:46.537061   32084 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0817 00:46:46.537645   32084 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0817 00:46:46.537645   32084 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0817 00:46:46.543937   32084 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210817003608-111344
	I0817 00:46:46.550789   32084 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 00:46:46.550789   32084 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 00:46:46.550789   32084 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0817 00:46:46.558996   32084 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0817 00:46:46.557994   32084 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210817003608-111344
	I0817 00:46:46.561044   32084 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0817 00:46:46.561044   32084 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0817 00:46:46.561044   32084 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0817 00:46:46.567008   32084 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210817003608-111344
	I0817 00:46:46.719798   32084 addons.go:135] Setting addon default-storageclass=true in "newest-cni-20210817003608-111344"
	W0817 00:46:46.719934   32084 addons.go:147] addon default-storageclass should already be in state true
	I0817 00:46:46.726163   32084 host.go:66] Checking if "newest-cni-20210817003608-111344" exists ...
	I0817 00:46:46.737914   32084 cli_runner.go:115] Run: docker container inspect newest-cni-20210817003608-111344 --format={{.State.Status}}
	I0817 00:46:47.127096   32084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55238 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\newest-cni-20210817003608-111344\id_rsa Username:docker}
	I0817 00:46:47.138097   32084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55238 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\newest-cni-20210817003608-111344\id_rsa Username:docker}
	I0817 00:46:47.148226   32084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55238 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\newest-cni-20210817003608-111344\id_rsa Username:docker}
	I0817 00:46:47.314097   32084 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0817 00:46:47.314228   32084 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0817 00:46:47.325366   32084 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210817003608-111344
	I0817 00:46:47.827549   32084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55238 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\newest-cni-20210817003608-111344\id_rsa Username:docker}
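The "scp memory --> /etc/kubernetes/addons/*.yaml" lines copy manifests held in memory onto the node over the SSH connections opened by sshutil.go. A sketch of the same idea using the ssh CLI and sudo tee; the port and key path below mirror the sshutil lines and are assumptions, not a reproduction of minikube's SSH runner:

    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    // scpMemory streams in-memory file contents to dst on the remote host
    // by piping them into "sudo tee" over ssh.
    func scpMemory(sshArgs []string, contents []byte, dst string) error {
        args := append(sshArgs, fmt.Sprintf("sudo tee %s >/dev/null", dst))
        cmd := exec.Command("ssh", args...)
        cmd.Stdin = bytes.NewReader(contents)
        return cmd.Run()
    }

    func main() {
        manifest := []byte("apiVersion: v1\nkind: Namespace\nmetadata:\n  name: demo\n")
        // assumption: connection details modeled on the sshutil.go lines above
        sshArgs := []string{"-p", "55238", "-i", "id_rsa", "docker@127.0.0.1"}
        if err := scpMemory(sshArgs, manifest, "/etc/kubernetes/addons/demo.yaml"); err != nil {
            fmt.Println("copy failed:", err)
        }
    }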
	I0817 00:46:45.853324   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:47.854527   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:46.485646   73600 logs.go:123] Gathering logs for storage-provisioner [00638b764dd3] ...
	I0817 00:46:46.485646   73600 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 00638b764dd3"
	I0817 00:46:47.452760   73600 logs.go:123] Gathering logs for storage-provisioner [4306a97290a5] ...
	I0817 00:46:47.452878   73600 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 4306a97290a5"
	I0817 00:46:47.807624   73600 logs.go:123] Gathering logs for container status ...
	I0817 00:46:47.807624   73600 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0817 00:46:48.155130   73600 logs.go:123] Gathering logs for kubelet ...
	I0817 00:46:48.155334   73600 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0817 00:46:48.529780   73600 logs.go:123] Gathering logs for describe nodes ...
	I0817 00:46:48.530783   73600 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0817 00:46:49.969401   73600 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (1.4385634s)
	I0817 00:46:49.974005   73600 logs.go:123] Gathering logs for coredns [7f3c95d6335f] ...
	I0817 00:46:49.974118   73600 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 7f3c95d6335f"
	I0817 00:46:50.253902   73600 logs.go:123] Gathering logs for kube-controller-manager [60b439d9ae55] ...
	I0817 00:46:50.253902   73600 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 60b439d9ae55"
	I0817 00:46:50.703420   73600 logs.go:123] Gathering logs for Docker ...
	I0817 00:46:50.703420   73600 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0817 00:46:50.884512   73600 logs.go:123] Gathering logs for dmesg ...
	I0817 00:46:50.884512   73600 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0817 00:46:50.983627   73600 logs.go:123] Gathering logs for kube-apiserver [3e5f0181aa79] ...
	I0817 00:46:50.983838   73600 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 3e5f0181aa79"
	I0817 00:46:47.286051   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:49.518431   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:48.853882   32084 ssh_runner.go:189] Completed: sudo systemctl is-active --quiet service kubelet: (3.0345266s)
	I0817 00:46:48.854093   32084 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (3.0496047s)
	I0817 00:46:48.854974   32084 start.go:708] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0817 00:46:48.860925   32084 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20210817003608-111344
	I0817 00:46:49.072190   32084 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0817 00:46:49.072190   32084 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0817 00:46:49.356794   32084 api_server.go:50] waiting for apiserver process to appear ...
	I0817 00:46:49.365400   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:46:49.435495   32084 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0817 00:46:49.436485   32084 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0817 00:46:49.499337   32084 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 00:46:49.517915   32084 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0817 00:46:49.518128   32084 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0817 00:46:49.654719   32084 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0817 00:46:49.654719   32084 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0817 00:46:49.678455   32084 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0817 00:46:49.782692   32084 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 00:46:49.782692   32084 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0817 00:46:50.379506   32084 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 00:46:50.578199   32084 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0817 00:46:50.578199   32084 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0817 00:46:50.794101   32084 ssh_runner.go:189] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.4286462s)
	I0817 00:46:50.794202   32084 api_server.go:70] duration metric: took 4.9895225s to wait for apiserver process to appear ...
	I0817 00:46:50.794202   32084 api_server.go:86] waiting for apiserver healthz status ...
	I0817 00:46:50.794202   32084 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55235/healthz ...
	I0817 00:46:50.832268   32084 api_server.go:265] https://127.0.0.1:55235/healthz returned 200:
	ok
	I0817 00:46:50.839314   32084 api_server.go:139] control plane version: v1.22.0-rc.0
	I0817 00:46:50.839314   32084 api_server.go:129] duration metric: took 45.1101ms to wait for apiserver health ...
	I0817 00:46:50.839314   32084 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 00:46:50.912069   32084 system_pods.go:59] 8 kube-system pods found
	I0817 00:46:50.912199   32084 system_pods.go:61] "coredns-78fcd69978-4rqlg" [e31d4e8c-dd23-45cf-9a37-aba902e87d97] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0817 00:46:50.912199   32084 system_pods.go:61] "etcd-newest-cni-20210817003608-111344" [43d91330-4f5d-46ac-aef5-352c59424787] Running
	I0817 00:46:50.912199   32084 system_pods.go:61] "kube-apiserver-newest-cni-20210817003608-111344" [b6d309fd-9aa2-45b7-aab0-caa42b6e983c] Running
	I0817 00:46:50.912199   32084 system_pods.go:61] "kube-controller-manager-newest-cni-20210817003608-111344" [8d7f557b-d69d-4017-a638-ec780cd4ccf3] Running
	I0817 00:46:50.912199   32084 system_pods.go:61] "kube-proxy-9nj8l" [de7a7f83-5225-4d60-9fba-e7b0c120247f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0817 00:46:50.912199   32084 system_pods.go:61] "kube-scheduler-newest-cni-20210817003608-111344" [199c0871-b83b-4083-8f3a-05523bb205dd] Running
	I0817 00:46:50.912199   32084 system_pods.go:61] "metrics-server-7c784ccb57-vkvfp" [9ec6eb01-a852-4f2e-a8bb-0d9888bcf668] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 00:46:50.912199   32084 system_pods.go:61] "storage-provisioner" [af23beac-6b23-4a97-9b39-7db56aa9f154] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0817 00:46:50.912199   32084 system_pods.go:74] duration metric: took 72.8823ms to wait for pod list to return data ...
	I0817 00:46:50.912199   32084 default_sa.go:34] waiting for default service account to be created ...
	I0817 00:46:50.939211   32084 default_sa.go:45] found service account: "default"
	I0817 00:46:50.939211   32084 default_sa.go:55] duration metric: took 27.0106ms for default service account to be created ...
	I0817 00:46:50.939211   32084 kubeadm.go:547] duration metric: took 5.1345255s to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0817 00:46:50.939456   32084 node_conditions.go:102] verifying NodePressure condition ...
	I0817 00:46:50.965726   32084 node_conditions.go:122] node storage ephemeral capacity is 65792556Ki
	I0817 00:46:50.965726   32084 node_conditions.go:123] node cpu capacity is 4
	I0817 00:46:50.965726   32084 node_conditions.go:105] duration metric: took 26.2689ms to run NodePressure ...
	I0817 00:46:50.965726   32084 start.go:231] waiting for startup goroutines ...
	I0817 00:46:51.391185   32084 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0817 00:46:51.391185   32084 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0817 00:46:52.136529   32084 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0817 00:46:52.136529   32084 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0817 00:46:52.879752   32084 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0817 00:46:52.879752   32084 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0817 00:46:49.873540   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:52.420534   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:51.245221   73600 logs.go:123] Gathering logs for etcd [f6eb6c2452d6] ...
	I0817 00:46:51.245221   73600 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 f6eb6c2452d6"
	I0817 00:46:54.268623   73600 system_pods.go:59] 9 kube-system pods found
	I0817 00:46:54.268764   73600 system_pods.go:61] "cilium-operator-99d899fb5-47tqd" [282c6ed0-512e-4527-8abd-c20b109a3ab5] Running
	I0817 00:46:54.268764   73600 system_pods.go:61] "cilium-zt4nw" [e6d28534-126f-46ed-a6f4-4f547e173b18] Running
	I0817 00:46:54.268764   73600 system_pods.go:61] "coredns-558bd4d5db-5kk5g" [b9fac283-fb2e-4da6-882b-f1e25b1a063f] Running
	I0817 00:46:54.268860   73600 system_pods.go:61] "etcd-cilium-20210817002204-111344" [0c40eb71-82aa-45bd-80d2-de25bb50aa30] Running
	I0817 00:46:54.268860   73600 system_pods.go:61] "kube-apiserver-cilium-20210817002204-111344" [c8d0631b-2b14-4310-81a8-ea94e8ef2a3f] Running
	I0817 00:46:54.268860   73600 system_pods.go:61] "kube-controller-manager-cilium-20210817002204-111344" [76bc8068-f9f7-44dc-b298-93ac3f8cce97] Running
	I0817 00:46:54.268860   73600 system_pods.go:61] "kube-proxy-mjrwl" [2c253bdb-59d9-4892-bbc7-900370c9783d] Running
	I0817 00:46:54.268860   73600 system_pods.go:61] "kube-scheduler-cilium-20210817002204-111344" [849b2404-ace6-4909-9c24-4842549362b8] Running
	I0817 00:46:54.268860   73600 system_pods.go:61] "storage-provisioner" [8cfa9260-52d1-4533-aa01-7d71b7565697] Running
	I0817 00:46:54.268860   73600 system_pods.go:74] duration metric: took 12.8952597s to wait for pod list to return data ...
	I0817 00:46:54.268860   73600 default_sa.go:34] waiting for default service account to be created ...
	I0817 00:46:54.272898   73600 default_sa.go:45] found service account: "default"
	I0817 00:46:54.272898   73600 default_sa.go:55] duration metric: took 4.0387ms for default service account to be created ...
	I0817 00:46:54.272898   73600 system_pods.go:116] waiting for k8s-apps to be running ...
	I0817 00:46:54.308094   73600 system_pods.go:86] 9 kube-system pods found
	I0817 00:46:54.308201   73600 system_pods.go:89] "cilium-operator-99d899fb5-47tqd" [282c6ed0-512e-4527-8abd-c20b109a3ab5] Running
	I0817 00:46:54.308201   73600 system_pods.go:89] "cilium-zt4nw" [e6d28534-126f-46ed-a6f4-4f547e173b18] Running
	I0817 00:46:54.308201   73600 system_pods.go:89] "coredns-558bd4d5db-5kk5g" [b9fac283-fb2e-4da6-882b-f1e25b1a063f] Running
	I0817 00:46:54.308201   73600 system_pods.go:89] "etcd-cilium-20210817002204-111344" [0c40eb71-82aa-45bd-80d2-de25bb50aa30] Running
	I0817 00:46:54.308201   73600 system_pods.go:89] "kube-apiserver-cilium-20210817002204-111344" [c8d0631b-2b14-4310-81a8-ea94e8ef2a3f] Running
	I0817 00:46:54.308201   73600 system_pods.go:89] "kube-controller-manager-cilium-20210817002204-111344" [76bc8068-f9f7-44dc-b298-93ac3f8cce97] Running
	I0817 00:46:54.308201   73600 system_pods.go:89] "kube-proxy-mjrwl" [2c253bdb-59d9-4892-bbc7-900370c9783d] Running
	I0817 00:46:54.308201   73600 system_pods.go:89] "kube-scheduler-cilium-20210817002204-111344" [849b2404-ace6-4909-9c24-4842549362b8] Running
	I0817 00:46:54.308303   73600 system_pods.go:89] "storage-provisioner" [8cfa9260-52d1-4533-aa01-7d71b7565697] Running
	I0817 00:46:54.308303   73600 system_pods.go:126] duration metric: took 35.4035ms to wait for k8s-apps to be running ...
	I0817 00:46:54.308395   73600 system_svc.go:44] waiting for kubelet service to be running ....
	I0817 00:46:54.316134   73600 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 00:46:54.411712   73600 system_svc.go:56] duration metric: took 103.3128ms WaitForService to wait for kubelet.
	I0817 00:46:54.411864   73600 kubeadm.go:547] duration metric: took 4m24.2443824s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0817 00:46:54.411864   73600 node_conditions.go:102] verifying NodePressure condition ...
	I0817 00:46:54.420626   73600 node_conditions.go:122] node storage ephemeral capacity is 65792556Ki
	I0817 00:46:54.420731   73600 node_conditions.go:123] node cpu capacity is 4
	I0817 00:46:54.420731   73600 node_conditions.go:105] duration metric: took 8.8669ms to run NodePressure ...
	I0817 00:46:54.420826   73600 start.go:231] waiting for startup goroutines ...
	I0817 00:46:54.607082   73600 start.go:462] kubectl: 1.20.0, cluster: 1.21.3 (minor skew: 1)
	I0817 00:46:54.609311   73600 out.go:177] * Done! kubectl is now configured to use "cilium-20210817002204-111344" cluster and "default" namespace by default
	I0817 00:46:51.527655   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:53.532789   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:55.996605   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:53.568801   32084 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0817 00:46:53.568801   32084 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0817 00:46:53.672945   32084 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0817 00:46:53.672945   32084 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0817 00:46:53.953306   32084 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0817 00:46:53.953443   32084 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0817 00:46:54.511731   32084 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0817 00:46:55.182612   32084 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.6830586s)
	I0817 00:46:55.188979   32084 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.5103147s)
	I0817 00:46:55.829821   32084 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.4501077s)
	I0817 00:46:55.829821   32084 addons.go:313] Verifying addon metrics-server=true in "newest-cni-20210817003608-111344"
	I0817 00:46:58.767442   32084 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.2555493s)
	I0817 00:46:58.769937   32084 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0817 00:46:58.770219   32084 addons.go:344] enableAddons completed in 12.9655654s
	I0817 00:46:58.919436   32084 start.go:462] kubectl: 1.20.0, cluster: 1.22.0-rc.0 (minor skew: 2)
	I0817 00:46:58.927300   32084 out.go:177] 
	W0817 00:46:58.927563   32084 out.go:242] ! C:\Program Files\Docker\Docker\resources\bin\kubectl.exe is version 1.20.0, which may have incompatibilities with Kubernetes 1.22.0-rc.0.
	I0817 00:46:58.929374   32084 out.go:177]   - Want kubectl v1.22.0-rc.0? Try 'minikube kubectl -- get pods -A'
	I0817 00:46:58.931225   32084 out.go:177] * Done! kubectl is now configured to use "newest-cni-20210817003608-111344" cluster and "default" namespace by default
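The closing lines compute the skew between the host kubectl (1.20.0) and the cluster (1.22.0-rc.0): two minor versions, which is why the warning fires, since kubectl only guarantees compatibility within one minor version of the server. A sketch of that check in plain stdlib Go, with a hypothetical helper name:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // minorSkew returns the absolute difference between the minor components
    // of two "major.minor.patch" versions, e.g. "1.20.0" vs "1.22.0-rc.0".
    func minorSkew(a, b string) int {
        minor := func(v string) int {
            m, _ := strconv.Atoi(strings.Split(v, ".")[1])
            return m
        }
        d := minor(a) - minor(b)
        if d < 0 {
            d = -d
        }
        return d
    }

    func main() {
        skew := minorSkew("1.20.0", "1.22.0-rc.0")
        fmt.Printf("minor skew: %d\n", skew) // prints 2, exceeding kubectl's supported skew of 1
        if skew > 1 {
            fmt.Println("! kubectl may have incompatibilities with this cluster")
        }
    }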
	I0817 00:46:54.877671   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:57.352071   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:58.020700   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:00.026526   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:59.367541   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:01.852774   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:03.864430   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:02.517865   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:04.986466   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:06.340126   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:08.371110   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:07.059721   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:09.533454   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:10.853228   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:13.438052   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:12.007117   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:14.014736   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:15.863766   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:18.371799   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:16.513999   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:18.573976   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:21.013036   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
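The interleaved pod_ready.go:102 lines above come from two other parallel test runs (PIDs 59296 and 56104) polling their coredns and calico-kube-controllers pods roughly every two seconds until the PodReady condition turns True. A client-go sketch of such a poll; the file name and hard-coded pod are illustrative, not minikube's pod_ready.go:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's PodReady condition is True,
    // the condition the `has status "Ready":"False"` lines are waiting on.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)
        for {
            pod, err := client.CoreV1().Pods("kube-system").Get(
                context.TODO(), "coredns-558bd4d5db-xnqd6", metav1.GetOptions{})
            if err == nil && isPodReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second) // matches the ~2s cadence in the log
        }
    }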
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2021-08-17 00:45:32 UTC, end at Tue 2021-08-17 00:47:29 UTC. --
	Aug 17 00:45:33 newest-cni-20210817003608-111344 dockerd[214]: time="2021-08-17T00:45:33.231370800Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Aug 17 00:45:33 newest-cni-20210817003608-111344 dockerd[214]: time="2021-08-17T00:45:33.245625500Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Aug 17 00:45:33 newest-cni-20210817003608-111344 dockerd[214]: time="2021-08-17T00:45:33.256457000Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Aug 17 00:45:33 newest-cni-20210817003608-111344 dockerd[214]: time="2021-08-17T00:45:33.256691700Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Aug 17 00:45:33 newest-cni-20210817003608-111344 dockerd[214]: time="2021-08-17T00:45:33.265043700Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Aug 17 00:45:33 newest-cni-20210817003608-111344 dockerd[214]: time="2021-08-17T00:45:33.324217200Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Aug 17 00:45:33 newest-cni-20210817003608-111344 dockerd[214]: time="2021-08-17T00:45:33.389318500Z" level=info msg="Loading containers: start."
	Aug 17 00:45:34 newest-cni-20210817003608-111344 dockerd[214]: time="2021-08-17T00:45:34.450412900Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 17 00:45:34 newest-cni-20210817003608-111344 dockerd[214]: time="2021-08-17T00:45:34.853836600Z" level=info msg="Loading containers: done."
	Aug 17 00:45:34 newest-cni-20210817003608-111344 dockerd[214]: time="2021-08-17T00:45:34.998184300Z" level=info msg="Docker daemon" commit=75249d8 graphdriver(s)=overlay2 version=20.10.8
	Aug 17 00:45:34 newest-cni-20210817003608-111344 dockerd[214]: time="2021-08-17T00:45:34.998905400Z" level=info msg="Daemon has completed initialization"
	Aug 17 00:45:35 newest-cni-20210817003608-111344 systemd[1]: Started Docker Application Container Engine.
	Aug 17 00:45:35 newest-cni-20210817003608-111344 dockerd[214]: time="2021-08-17T00:45:35.256494900Z" level=info msg="API listen on [::]:2376"
	Aug 17 00:45:35 newest-cni-20210817003608-111344 dockerd[214]: time="2021-08-17T00:45:35.282827000Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 17 00:46:51 newest-cni-20210817003608-111344 dockerd[214]: time="2021-08-17T00:46:51.588664900Z" level=info msg="ignoring event" container=3bfef131d1b1f12f04d417d281804d41ec5102023e2def746de638cf3b080afb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 17 00:46:52 newest-cni-20210817003608-111344 dockerd[214]: time="2021-08-17T00:46:52.018776900Z" level=info msg="ignoring event" container=07dad6a063d0f793a01c019f42cb72df251e320b5837881ebc2f18d6e5d4e202 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 17 00:46:59 newest-cni-20210817003608-111344 dockerd[214]: time="2021-08-17T00:46:59.301831600Z" level=info msg="ignoring event" container=f9dcbc7254771bbc72ff6f191cf9f076caf32f5d0808b9fbbc6db3438eb4799c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 17 00:47:02 newest-cni-20210817003608-111344 dockerd[214]: time="2021-08-17T00:47:02.187379300Z" level=info msg="ignoring event" container=8af96648d391d00e142cd622831c0404356e6a3e02d40e41d1e668eefa23e4aa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 17 00:47:06 newest-cni-20210817003608-111344 dockerd[214]: time="2021-08-17T00:47:06.738851300Z" level=info msg="ignoring event" container=f4aacd3d844a6f76a85bffd1e77f606e1b4b56ac6fbb4bf89e4e0f524eb3352d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 17 00:47:08 newest-cni-20210817003608-111344 dockerd[214]: time="2021-08-17T00:47:08.621499400Z" level=info msg="ignoring event" container=580b18e7b3a2e91b3e510c7485e65413ec7a8e8047c86bd5dacc44ea99ac8d82 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 17 00:47:09 newest-cni-20210817003608-111344 dockerd[214]: time="2021-08-17T00:47:09.396246400Z" level=info msg="ignoring event" container=dff9c779de811385e6299833728dc4cefa70859d314ba7e65509a181c578393b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 17 00:47:15 newest-cni-20210817003608-111344 dockerd[214]: time="2021-08-17T00:47:15.012088400Z" level=info msg="ignoring event" container=3aeab7968c57c3e917a7e763720ad196010cd81506d99bd82e8f4ebd535baed0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 17 00:47:20 newest-cni-20210817003608-111344 dockerd[214]: time="2021-08-17T00:47:20.420655700Z" level=info msg="ignoring event" container=c073b9be55aed2ad08d5398d7570bc0434e302669f76f96e3dd696d10d6b6e25 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 17 00:47:22 newest-cni-20210817003608-111344 dockerd[214]: time="2021-08-17T00:47:22.006288200Z" level=info msg="ignoring event" container=e611ec2d84688b3925132b85a4323c3bba873c1fabb203e4990e8b23e29cc5d0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 17 00:47:25 newest-cni-20210817003608-111344 dockerd[214]: time="2021-08-17T00:47:25.876752500Z" level=info msg="ignoring event" container=4f9627e4c643ff84f76dc44391c24481fed8843365dbe644c8217b1591dfb81e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	03ba21d79411d       ea6b13ed84e03       22 seconds ago       Running             kube-proxy                1                   0c0bd04ab0c04
	2333add2d120c       6e38f40d628db       27 seconds ago       Running             storage-provisioner       1                   9182ee79e2b41
	f5faeb9a923fd       b2462aa94d403       About a minute ago   Running             kube-apiserver            1                   27d16d443fe31
	e01962e6badd5       0048118155842       About a minute ago   Running             etcd                      1                   f57cd0e8764a4
	8e314399ffbfa       cf9cba6c3e4a8       About a minute ago   Running             kube-controller-manager   1                   ba4ab5e4b6029
	90a9fc57f1904       7da2efaa5b480       About a minute ago   Running             kube-scheduler            1                   7b1abe2d483fa
	20f37dcaff3d1       6e38f40d628db       2 minutes ago        Exited              storage-provisioner       0                   6c2da5a2baca3
	1ad258249d601       ea6b13ed84e03       2 minutes ago        Exited              kube-proxy                0                   9a8baa9115fed
	aac407980692a       cf9cba6c3e4a8       3 minutes ago        Exited              kube-controller-manager   0                   41d3f9624cd5c
	25fbe133425d8       7da2efaa5b480       3 minutes ago        Exited              kube-scheduler            0                   75b1c13f00ae4
	28755f53d020c       b2462aa94d403       3 minutes ago        Exited              kube-apiserver            0                   11d218d1e7490
	1737f02b01d38       0048118155842       3 minutes ago        Exited              etcd                      0                   00e51ba67e2f8
	
	* 
	* ==> describe nodes <==
	* Name:               newest-cni-20210817003608-111344
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-20210817003608-111344
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48
	                    minikube.k8s.io/name=newest-cni-20210817003608-111344
	                    minikube.k8s.io/updated_at=2021_08_17T00_44_14_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Aug 2021 00:44:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-20210817003608-111344
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Aug 2021 00:47:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Aug 2021 00:46:34 +0000   Tue, 17 Aug 2021 00:44:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Aug 2021 00:46:34 +0000   Tue, 17 Aug 2021 00:44:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Aug 2021 00:46:34 +0000   Tue, 17 Aug 2021 00:44:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Aug 2021 00:46:34 +0000   Tue, 17 Aug 2021 00:44:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-20210817003608-111344
	Capacity:
	  cpu:                4
	  ephemeral-storage:  65792556Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             20481980Ki
	  pods:               110
	Allocatable:
	  cpu:                4
	  ephemeral-storage:  65792556Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             20481980Ki
	  pods:               110
	System Info:
	  Machine ID:                 dfc5def84a78402c9caa00a7cad25a86
	  System UUID:                815fae9c-df15-489f-a826-e5f5275d966a
	  Boot ID:                    59d49a8b-044c-440e-a1d3-94e728b56235
	  Kernel Version:             4.19.121-linuxkit
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.8
	  Kubelet Version:            v1.22.0-rc.0
	  Kube-Proxy Version:         v1.22.0-rc.0
	PodCIDR:                      192.168.0.0/24
	PodCIDRs:                     192.168.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-78fcd69978-4rqlg                                    100m (2%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m1s
	  kube-system                 etcd-newest-cni-20210817003608-111344                       100m (2%)     0 (0%)      100Mi (0%)       0 (0%)         3m6s
	  kube-system                 kube-apiserver-newest-cni-20210817003608-111344             250m (6%)     0 (0%)      0 (0%)           0 (0%)         3m6s
	  kube-system                 kube-controller-manager-newest-cni-20210817003608-111344    200m (5%)     0 (0%)      0 (0%)           0 (0%)         3m6s
	  kube-system                 kube-proxy-9nj8l                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	  kube-system                 kube-scheduler-newest-cni-20210817003608-111344             100m (2%)     0 (0%)      0 (0%)           0 (0%)         3m6s
	  kube-system                 metrics-server-7c784ccb57-vkvfp                             100m (2%)     0 (0%)      300Mi (1%)       0 (0%)         2m34s
	  kube-system                 storage-provisioner                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m44s
	  kubernetes-dashboard        dashboard-metrics-scraper-8685c45546-hf47r                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	  kubernetes-dashboard        kubernetes-dashboard-6fcdf4f6d-smdrj                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (21%)  0 (0%)
	  memory             470Mi (2%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From     Message
	  ----    ------                   ----                   ----     -------
	  Normal  NodeHasSufficientPID     3m54s (x7 over 3m55s)  kubelet  Node newest-cni-20210817003608-111344 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  3m53s (x8 over 3m55s)  kubelet  Node newest-cni-20210817003608-111344 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m53s (x8 over 3m55s)  kubelet  Node newest-cni-20210817003608-111344 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 3m14s                  kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m13s                  kubelet  Node newest-cni-20210817003608-111344 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m13s                  kubelet  Node newest-cni-20210817003608-111344 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m13s                  kubelet  Node newest-cni-20210817003608-111344 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             3m13s                  kubelet  Node newest-cni-20210817003608-111344 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  3m9s                   kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m                     kubelet  Node newest-cni-20210817003608-111344 status is now: NodeReady
	  Normal  Starting                 88s                    kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  87s                    kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  86s (x8 over 88s)      kubelet  Node newest-cni-20210817003608-111344 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    86s (x8 over 88s)      kubelet  Node newest-cni-20210817003608-111344 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     86s (x7 over 88s)      kubelet  Node newest-cni-20210817003608-111344 status is now: NodeHasSufficientPID
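The Conditions and Capacity blocks above are the same fields the node_conditions.go checks earlier in the log read (ephemeral storage 65792556Ki, 4 CPUs, all pressure conditions False). A client-go sketch that fetches them via the API rather than kubectl describe; it assumes a reachable kubeconfig and is illustration only:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)
        node, err := client.CoreV1().Nodes().Get(
            context.TODO(), "newest-cni-20210817003608-111344", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // "node storage ephemeral capacity is 65792556Ki" / "node cpu capacity is 4"
        fmt.Println("ephemeral-storage:", node.Status.Capacity.StorageEphemeral().String())
        fmt.Println("cpu:", node.Status.Capacity.Cpu().String())
        // the MemoryPressure / DiskPressure / PIDPressure rows of the Conditions table
        for _, c := range node.Status.Conditions {
            switch c.Type {
            case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
                fmt.Printf("%s=%s\n", c.Type, c.Status)
            }
        }
    }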
	
	* 
	* ==> dmesg <==
	* [  +0.000044]  hv_stimer0_isr+0x20/0x2d
	[  +0.000053]  hv_stimer0_vector_handler+0x3b/0x57
	[  +0.000021]  hv_stimer0_callback_vector+0xf/0x20
	[  +0.000002]  </IRQ>
	[  +0.000002] RIP: 0010:native_safe_halt+0x7/0x8
	[  +0.000002] Code: 60 02 df f0 83 44 24 fc 00 48 8b 00 a8 08 74 0b 65 81 25 dd ce 6f 6e ff ff ff 7f c3 e8 ce e6 72 ff f4 c3 e8 c7 e6 72 ff fb f4 <c3> 0f 1f 44 00 00 53 e8 69 0e 82 ff 65 8b 35 83 64 6f 6e 31 ff e8
	[  +0.000001] RSP: 0018:ffffb51d800a3ec8 EFLAGS: 00000246 ORIG_RAX: ffffffffffffff12
	[  +0.000002] RAX: ffffffff91918b30 RBX: 0000000000000001 RCX: ffffffff92253150
	[  +0.000001] RDX: 0000000000171622 RSI: 0000000000000001 RDI: 0000000000000001
	[  +0.000001] RBP: 0000000000000000 R08: 0000007cfc1104b2 R09: 0000000000000002
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: ffff8d162e19ef80 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000002]  ? __sched_text_end+0x1/0x1
	[  +0.000021]  ? native_safe_halt+0x5/0x8
	[  +0.000002]  default_idle+0x1b/0x2c
	[  +0.000003]  do_idle+0xe5/0x216
	[  +0.000003]  cpu_startup_entry+0x6f/0x71
	[  +0.000019]  start_secondary+0x18e/0x1a9
	[  +0.000032]  secondary_startup_64+0xa4/0xb0
	[  +0.000020] ---[ end trace b7d34331c4afdfb9 ]---
	[Aug17 00:14] tee (131347): /proc/127190/oom_adj is deprecated, please use /proc/127190/oom_score_adj instead.
	[Aug17 00:18] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000007] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.100196] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000006] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	
	* 
	* ==> etcd [1737f02b01d3] <==
	* {"level":"info","ts":"2021-08-17T00:43:45.714Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2021-08-17T00:43:45.714Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2021-08-17T00:43:45.719Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-08-17T00:43:45.714Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-08-17T00:43:45.728Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2021-08-17T00:43:45.772Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2021-08-17T00:43:45.712Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:newest-cni-20210817003608-111344 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2021-08-17T00:44:29.158Z","caller":"traceutil/trace.go:171","msg":"trace[736617236] transaction","detail":"{read_only:false; response_revision:382; number_of_response:1; }","duration":"184.6827ms","start":"2021-08-17T00:44:28.971Z","end":"2021-08-17T00:44:29.155Z","steps":["trace[736617236] 'process raft request'  (duration: 173.0452ms)","trace[736617236] 'compare'  (duration: 11.5305ms)"],"step_count":2}
	{"level":"info","ts":"2021-08-17T00:44:29.150Z","caller":"traceutil/trace.go:171","msg":"trace[789087407] transaction","detail":"{read_only:false; response_revision:381; number_of_response:1; }","duration":"176.3015ms","start":"2021-08-17T00:44:28.971Z","end":"2021-08-17T00:44:29.147Z","steps":["trace[789087407] 'process raft request'  (duration: 81.6486ms)","trace[789087407] 'compare'  (duration: 89.8943ms)"],"step_count":2}
	{"level":"info","ts":"2021-08-17T00:44:56.894Z","caller":"traceutil/trace.go:171","msg":"trace[139588176] transaction","detail":"{read_only:false; response_revision:485; number_of_response:1; }","duration":"151.3941ms","start":"2021-08-17T00:44:56.743Z","end":"2021-08-17T00:44:56.894Z","steps":["trace[139588176] 'process raft request'  (duration: 129.8582ms)"],"step_count":1}
	{"level":"info","ts":"2021-08-17T00:44:56.945Z","caller":"traceutil/trace.go:171","msg":"trace[252463044] transaction","detail":"{read_only:false; response_revision:486; number_of_response:1; }","duration":"201.9073ms","start":"2021-08-17T00:44:56.743Z","end":"2021-08-17T00:44:56.945Z","steps":["trace[252463044] 'process raft request'  (duration: 167.5164ms)"],"step_count":1}
	{"level":"info","ts":"2021-08-17T00:44:57.506Z","caller":"traceutil/trace.go:171","msg":"trace[132183243] linearizableReadLoop","detail":"{readStateIndex:513; appliedIndex:512; }","duration":"100.3325ms","start":"2021-08-17T00:44:57.406Z","end":"2021-08-17T00:44:57.506Z","steps":["trace[132183243] 'read index received'  (duration: 21.2019ms)","trace[132183243] 'applied index is now lower than readState.Index'  (duration: 79.129ms)"],"step_count":2}
	{"level":"info","ts":"2021-08-17T00:44:57.507Z","caller":"traceutil/trace.go:171","msg":"trace[1559180452] transaction","detail":"{read_only:false; response_revision:498; number_of_response:1; }","duration":"103.3911ms","start":"2021-08-17T00:44:57.403Z","end":"2021-08-17T00:44:57.507Z","steps":["trace[1559180452] 'process raft request'  (duration: 25.894ms)","trace[1559180452] 'compare'  (duration: 76.6432ms)"],"step_count":2}
	{"level":"warn","ts":"2021-08-17T00:44:57.532Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"128.3711ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2021-08-17T00:44:57.532Z","caller":"traceutil/trace.go:171","msg":"trace[1579444204] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:498; }","duration":"128.6348ms","start":"2021-08-17T00:44:57.403Z","end":"2021-08-17T00:44:57.532Z","steps":["trace[1579444204] 'agreement among raft nodes before linearized reading'  (duration: 103.6843ms)","trace[1579444204] 'get authentication metadata'  (duration: 24.6552ms)"],"step_count":2}
	{"level":"warn","ts":"2021-08-17T00:44:57.546Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"131.6455ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/metrics-server\" ","response":"range_response_count:1 size:708"}
	{"level":"info","ts":"2021-08-17T00:44:57.546Z","caller":"traceutil/trace.go:171","msg":"trace[5736960] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/metrics-server; range_end:; response_count:1; response_revision:498; }","duration":"160.8145ms","start":"2021-08-17T00:44:57.385Z","end":"2021-08-17T00:44:57.546Z","steps":["trace[5736960] 'agreement among raft nodes before linearized reading'  (duration: 125.6888ms)"],"step_count":1}
	{"level":"info","ts":"2021-08-17T00:44:58.871Z","caller":"traceutil/trace.go:171","msg":"trace[2079699221] transaction","detail":"{read_only:false; response_revision:515; number_of_response:1; }","duration":"102.1328ms","start":"2021-08-17T00:44:58.769Z","end":"2021-08-17T00:44:58.871Z","steps":["trace[2079699221] 'compare'  (duration: 77.3838ms)"],"step_count":1}
	{"level":"info","ts":"2021-08-17T00:45:05.417Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2021-08-17T00:45:05.420Z","caller":"embed/etcd.go:367","msg":"closing etcd server","name":"newest-cni-20210817003608-111344","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	WARNING: 2021/08/17 00:45:05 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2021-08-17T00:45:05.645Z","caller":"etcdserver/server.go:1438","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2021-08-17T00:45:05.695Z","caller":"embed/etcd.go:562","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2021-08-17T00:45:05.703Z","caller":"embed/etcd.go:567","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2021-08-17T00:45:05.707Z","caller":"embed/etcd.go:369","msg":"closed etcd server","name":"newest-cni-20210817003608-111344","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	* 
	* ==> etcd [e01962e6badd] <==
	* {"level":"info","ts":"2021-08-17T00:46:33.230Z","caller":"traceutil/trace.go:171","msg":"trace[1761360565] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:526; }","duration":"269.5939ms","start":"2021-08-17T00:46:32.961Z","end":"2021-08-17T00:46:33.230Z","steps":["trace[1761360565] 'agreement among raft nodes before linearized reading'  (duration: 128.1187ms)"],"step_count":1}
	{"level":"warn","ts":"2021-08-17T00:46:33.134Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"175.1906ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2021-08-17T00:46:33.232Z","caller":"traceutil/trace.go:171","msg":"trace[1914193079] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:526; }","duration":"272.7797ms","start":"2021-08-17T00:46:32.959Z","end":"2021-08-17T00:46:33.232Z","steps":["trace[1914193079] 'agreement among raft nodes before linearized reading'  (duration: 175.068ms)"],"step_count":1}
	{"level":"info","ts":"2021-08-17T00:46:33.148Z","caller":"traceutil/trace.go:171","msg":"trace[706483103] range","detail":"{range_begin:/registry/services/specs/; range_end:/registry/services/specs0; response_count:3; response_revision:526; }","duration":"185.6129ms","start":"2021-08-17T00:46:32.962Z","end":"2021-08-17T00:46:33.148Z","steps":["trace[706483103] 'agreement among raft nodes before linearized reading'  (duration: 117.2576ms)"],"step_count":1}
	{"level":"info","ts":"2021-08-17T00:46:33.268Z","caller":"traceutil/trace.go:171","msg":"trace[53436151] transaction","detail":"{read_only:false; response_revision:527; number_of_response:1; }","duration":"117.5447ms","start":"2021-08-17T00:46:33.148Z","end":"2021-08-17T00:46:33.265Z","steps":["trace[53436151] 'process raft request'  (duration: 11.1496ms)","trace[53436151] 'compare'  (duration: 32.6395ms)","trace[53436151] 'get key's previous created_revision and leaseID' {req_type:put; key:/registry/leases/kube-node-lease/newest-cni-20210817003608-111344; req_size:616; } (duration: 57.1363ms)"],"step_count":3}
	{"level":"warn","ts":"2021-08-17T00:46:33.379Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"162.6499ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-newest-cni-20210817003608-111344\" ","response":"range_response_count:1 size:7021"}
	{"level":"info","ts":"2021-08-17T00:46:33.379Z","caller":"traceutil/trace.go:171","msg":"trace[1920501255] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-newest-cni-20210817003608-111344; range_end:; response_count:1; response_revision:527; }","duration":"162.7371ms","start":"2021-08-17T00:46:33.216Z","end":"2021-08-17T00:46:33.379Z","steps":["trace[1920501255] 'agreement among raft nodes before linearized reading'  (duration: 51.8261ms)","trace[1920501255] 'range keys from in-memory index tree'  (duration: 110.7737ms)"],"step_count":2}
	{"level":"warn","ts":"2021-08-17T00:46:33.385Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"162.4288ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/newest-cni-20210817003608-111344\" ","response":"range_response_count:1 size:5131"}
	{"level":"info","ts":"2021-08-17T00:46:33.394Z","caller":"traceutil/trace.go:171","msg":"trace[306640160] range","detail":"{range_begin:/registry/minions/newest-cni-20210817003608-111344; range_end:; response_count:1; response_revision:527; }","duration":"171.3639ms","start":"2021-08-17T00:46:33.223Z","end":"2021-08-17T00:46:33.394Z","steps":["trace[306640160] 'agreement among raft nodes before linearized reading'  (duration: 45.0947ms)","trace[306640160] 'range keys from in-memory index tree'  (duration: 117.3019ms)"],"step_count":2}
	{"level":"warn","ts":"2021-08-17T00:46:33.409Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"189.7107ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/newest-cni-20210817003608-111344\" ","response":"range_response_count:1 size:614"}
	{"level":"info","ts":"2021-08-17T00:46:33.412Z","caller":"traceutil/trace.go:171","msg":"trace[1535382064] range","detail":"{range_begin:/registry/csinodes/newest-cni-20210817003608-111344; range_end:; response_count:1; response_revision:527; }","duration":"194.7617ms","start":"2021-08-17T00:46:33.216Z","end":"2021-08-17T00:46:33.411Z","steps":["trace[1535382064] 'agreement among raft nodes before linearized reading'  (duration: 52.0513ms)","trace[1535382064] 'range keys from in-memory index tree'  (duration: 123.2751ms)","trace[1535382064] 'range keys from bolt db'  (duration: 14.3674ms)"],"step_count":3}
	{"level":"info","ts":"2021-08-17T00:46:35.549Z","caller":"traceutil/trace.go:171","msg":"trace[2066404606] linearizableReadLoop","detail":"{readStateIndex:556; appliedIndex:556; }","duration":"131.3281ms","start":"2021-08-17T00:46:35.417Z","end":"2021-08-17T00:46:35.549Z","steps":["trace[2066404606] 'read index received'  (duration: 131.3175ms)","trace[2066404606] 'applied index is now lower than readState.Index'  (duration: 9.1µs)"],"step_count":2}
	{"level":"warn","ts":"2021-08-17T00:46:35.558Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"140.721ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/newest-cni-20210817003608-111344.169bf1724e5b6364\" ","response":"range_response_count:1 size:731"}
	{"level":"info","ts":"2021-08-17T00:46:35.559Z","caller":"traceutil/trace.go:171","msg":"trace[1192011372] range","detail":"{range_begin:/registry/events/default/newest-cni-20210817003608-111344.169bf1724e5b6364; range_end:; response_count:1; response_revision:533; }","duration":"141.3206ms","start":"2021-08-17T00:46:35.417Z","end":"2021-08-17T00:46:35.559Z","steps":["trace[1192011372] 'agreement among raft nodes before linearized reading'  (duration: 140.6715ms)"],"step_count":1}
	{"level":"warn","ts":"2021-08-17T00:46:35.672Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"114.7332ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:controller:endpointslicemirroring-controller\" ","response":"range_response_count:1 size:843"}
	{"level":"info","ts":"2021-08-17T00:46:35.672Z","caller":"traceutil/trace.go:171","msg":"trace[1250926333] range","detail":"{range_begin:/registry/clusterroles/system:controller:endpointslicemirroring-controller; range_end:; response_count:1; response_revision:534; }","duration":"114.828ms","start":"2021-08-17T00:46:35.557Z","end":"2021-08-17T00:46:35.672Z","steps":["trace[1250926333] 'agreement among raft nodes before linearized reading'  (duration: 53.9934ms)","trace[1250926333] 'range keys from in-memory index tree'  (duration: 47.3371ms)"],"step_count":2}
	{"level":"warn","ts":"2021-08-17T00:46:37.089Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"102.4444ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/newest-cni-20210817003608-111344.169bf1724e5b2994\" ","response":"range_response_count:1 size:737"}
	{"level":"info","ts":"2021-08-17T00:46:37.089Z","caller":"traceutil/trace.go:171","msg":"trace[1907334171] range","detail":"{range_begin:/registry/events/default/newest-cni-20210817003608-111344.169bf1724e5b2994; range_end:; response_count:1; response_revision:541; }","duration":"102.5457ms","start":"2021-08-17T00:46:36.986Z","end":"2021-08-17T00:46:37.089Z","steps":["trace[1907334171] 'agreement among raft nodes before linearized reading'  (duration: 83.6833ms)","trace[1907334171] 'range keys from in-memory index tree'  (duration: 18.276ms)"],"step_count":2}
	{"level":"info","ts":"2021-08-17T00:46:37.357Z","caller":"traceutil/trace.go:171","msg":"trace[307722950] linearizableReadLoop","detail":"{readStateIndex:567; appliedIndex:567; }","duration":"109.8043ms","start":"2021-08-17T00:46:37.247Z","end":"2021-08-17T00:46:37.357Z","steps":["trace[307722950] 'read index received'  (duration: 109.795ms)","trace[307722950] 'applied index is now lower than readState.Index'  (duration: 7.6µs)"],"step_count":2}
	{"level":"warn","ts":"2021-08-17T00:46:37.409Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"176.7049ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/system:controller:namespace-controller\" ","response":"range_response_count:1 size:757"}
	{"level":"info","ts":"2021-08-17T00:46:37.410Z","caller":"traceutil/trace.go:171","msg":"trace[198857094] range","detail":"{range_begin:/registry/clusterrolebindings/system:controller:namespace-controller; range_end:; response_count:1; response_revision:544; }","duration":"176.8048ms","start":"2021-08-17T00:46:37.232Z","end":"2021-08-17T00:46:37.409Z","steps":["trace[198857094] 'agreement among raft nodes before linearized reading'  (duration: 132.8038ms)","trace[198857094] 'range keys from in-memory index tree'  (duration: 43.8595ms)"],"step_count":2}
	{"level":"warn","ts":"2021-08-17T00:46:37.415Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"145.7189ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2021-08-17T00:46:37.417Z","caller":"traceutil/trace.go:171","msg":"trace[982375475] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:544; }","duration":"148.3423ms","start":"2021-08-17T00:46:37.269Z","end":"2021-08-17T00:46:37.417Z","steps":["trace[982375475] 'agreement among raft nodes before linearized reading'  (duration: 118.7866ms)"],"step_count":1}
	{"level":"warn","ts":"2021-08-17T00:46:37.422Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"149.7943ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/newest-cni-20210817003608-111344.169bf1724e5b6364\" ","response":"range_response_count:1 size:731"}
	{"level":"info","ts":"2021-08-17T00:46:37.422Z","caller":"traceutil/trace.go:171","msg":"trace[1843020002] range","detail":"{range_begin:/registry/events/default/newest-cni-20210817003608-111344.169bf1724e5b6364; range_end:; response_count:1; response_revision:544; }","duration":"154.348ms","start":"2021-08-17T00:46:37.268Z","end":"2021-08-17T00:46:37.422Z","steps":["trace[1843020002] 'agreement among raft nodes before linearized reading'  (duration: 118.8099ms)","trace[1843020002] 'range keys from in-memory index tree'  (duration: 16.2407ms)"],"step_count":2}
	
	* 
	* ==> kernel <==
	*  00:47:33 up  1:43,  0 users,  load average: 37.14, 28.76, 19.17
	Linux newest-cni-20210817003608-111344 4.19.121-linuxkit #1 SMP Tue Dec 1 17:50:32 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [28755f53d020] <==
	* W0817 00:45:08.213062       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:45:08.215998       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:45:08.217234       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:45:08.219274       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:45:08.233244       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:45:08.239044       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:45:08.240524       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:45:08.244087       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:45:08.274940       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:45:08.278160       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:45:08.279218       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:45:08.289250       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:45:08.302213       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:45:08.308297       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:45:08.331264       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:45:08.333825       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:45:08.339508       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:45:08.372571       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:45:08.383218       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:45:08.387833       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:45:08.398467       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:45:08.403070       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:45:08.403224       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:45:08.404464       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:45:08.443378       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	
	* 
	* ==> kube-apiserver [f5faeb9a923f] <==
	* I0817 00:46:32.672454       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0817 00:46:32.760078       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0817 00:46:32.773441       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0817 00:46:32.777097       1 cache.go:39] Caches are synced for autoregister controller
	I0817 00:46:32.783068       1 apf_controller.go:304] Running API Priority and Fairness config worker
	I0817 00:46:32.797613       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0817 00:46:32.859261       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0817 00:46:32.859481       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0817 00:46:32.875005       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0817 00:46:33.116113       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0817 00:46:33.463203       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	W0817 00:46:40.793082       1 handler_proxy.go:104] no RequestInfo found in the context
	E0817 00:46:40.793201       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0817 00:46:40.793213       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0817 00:46:43.709700       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0817 00:46:44.117821       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0817 00:46:45.158746       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0817 00:46:45.388329       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0817 00:46:45.455341       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0817 00:46:54.920736       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0817 00:46:56.928571       1 controller.go:611] quota admission added evaluator for: namespaces
	I0817 00:46:57.251424       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0817 00:46:57.947156       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0817 00:46:58.587400       1 controller.go:611] quota admission added evaluator for: endpoints
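Note: the 503 "service unavailable" for v1beta1.metrics.k8s.io above means the aggregation layer could not reach the metrics-server backend (its pod never got a network sandbox; see the CNI failures in the kubelet section below), and the same unready group shows up as discovery failures in the kube-controller-manager log. A minimal client-go sketch, assuming a kubeconfig path in $KUBECONFIG (an assumption, not part of the harness), that probes the aggregated API directly:

// metricscheck.go — minimal sketch: ask the aggregation layer for
// metrics.k8s.io/v1beta1. An error here corresponds to the 503 logged
// above, i.e. the metrics-server backend is not serving yet.
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	rl, err := cs.Discovery().ServerResourcesForGroupVersion("metrics.k8s.io/v1beta1")
	if err != nil {
		fmt.Fprintf(os.Stderr, "metrics API not ready: %v\n", err)
		os.Exit(1)
	}
	fmt.Printf("metrics API serving %d resources\n", len(rl.APIResources))
}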
	
	* 
	* ==> kube-controller-manager [8e314399ffbf] <==
	* I0817 00:46:54.934681       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0817 00:46:54.934719       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0817 00:46:55.003026       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0817 00:46:57.293234       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-8685c45546 to 1"
	I0817 00:46:57.450493       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-6fcdf4f6d to 1"
	I0817 00:46:57.576580       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0817 00:46:57.620731       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0817 00:46:57.621643       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0817 00:46:57.673641       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0817 00:46:57.675627       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0817 00:46:57.676275       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0817 00:46:57.692037       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0817 00:46:57.692445       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0817 00:46:57.712164       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0817 00:46:57.712879       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0817 00:46:57.713133       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0817 00:46:57.713170       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0817 00:46:57.788171       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0817 00:46:57.791170       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0817 00:46:57.793306       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0817 00:46:57.793404       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0817 00:46:57.914143       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-6fcdf4f6d-smdrj"
	I0817 00:46:57.997601       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-8685c45546-hf47r"
	E0817 00:47:24.614048       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0817 00:47:25.296080       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-controller-manager [aac407980692] <==
	* I0817 00:44:28.259536       1 shared_informer.go:247] Caches are synced for resource quota 
	I0817 00:44:28.275128       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client 
	I0817 00:44:28.275157       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving 
	I0817 00:44:28.275172       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client 
	I0817 00:44:28.275191       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0817 00:44:28.433263       1 event.go:291] "Event occurred" object="kube-system/kube-controller-manager-newest-cni-20210817003608-111344" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0817 00:44:29.490142       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0817 00:44:29.857041       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0817 00:44:29.857081       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0817 00:44:29.894473       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0817 00:44:30.010871       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-9nj8l"
	I0817 00:44:30.128001       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-78fcd69978 to 2"
	I0817 00:44:30.453790       1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-78fcd69978-4rqlg"
	I0817 00:44:30.652384       1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-78fcd69978-8gr9m"
	I0817 00:44:31.766536       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-78fcd69978 to 1"
	I0817 00:44:31.969456       1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-78fcd69978-8gr9m"
	I0817 00:44:33.156848       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0817 00:44:56.708209       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-7c784ccb57 to 1"
	I0817 00:44:57.011563       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-7c784ccb57-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0817 00:44:57.139502       1 replica_set.go:536] sync "kube-system/metrics-server-7c784ccb57" failed with pods "metrics-server-7c784ccb57-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	E0817 00:44:57.203165       1 replica_set.go:536] sync "kube-system/metrics-server-7c784ccb57" failed with pods "metrics-server-7c784ccb57-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0817 00:44:57.203550       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-7c784ccb57-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	I0817 00:44:57.557498       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-7c784ccb57-vkvfp"
	E0817 00:44:58.648337       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server could not find the requested resource
	W0817 00:45:00.118933       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server could not find the requested resource]
	
	* 
	* ==> kube-proxy [03ba21d79411] <==
	* I0817 00:47:11.934596       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0817 00:47:11.948255       1 server_others.go:140] Detected node IP 192.168.76.2
	W0817 00:47:11.948346       1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
	I0817 00:47:12.601553       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0817 00:47:12.601643       1 server_others.go:212] Using iptables Proxier.
	I0817 00:47:12.601661       1 server_others.go:219] creating dualStackProxier for iptables.
	W0817 00:47:12.601711       1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0817 00:47:12.637783       1 server.go:649] Version: v1.22.0-rc.0
	I0817 00:47:12.663345       1 config.go:315] Starting service config controller
	I0817 00:47:12.663425       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0817 00:47:12.663664       1 config.go:224] Starting endpoint slice config controller
	I0817 00:47:12.663671       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	E0817 00:47:12.748626       1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"newest-cni-20210817003608-111344.169bf1826aae74b0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc03edfa4277bc95c, ext:1558288801, loc:(*time.Location)(0x2d7f3c0)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-newest-cni-20210817003608-111344", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"newest-cni-20210817003608-111344", UID:"newest-cni-20210817003608-111344", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "newest-cni-20210817003608-111344.169bf1826aae74b0" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
	I0817 00:47:12.769558       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0817 00:47:12.769653       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-proxy [1ad258249d60] <==
	* I0817 00:44:47.824778       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0817 00:44:47.824957       1 server_others.go:140] Detected node IP 192.168.76.2
	W0817 00:44:47.825371       1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
	I0817 00:44:49.172138       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0817 00:44:49.172200       1 server_others.go:212] Using iptables Proxier.
	I0817 00:44:49.172220       1 server_others.go:219] creating dualStackProxier for iptables.
	W0817 00:44:49.172257       1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0817 00:44:49.217902       1 server.go:649] Version: v1.22.0-rc.0
	I0817 00:44:49.279820       1 config.go:315] Starting service config controller
	I0817 00:44:49.279889       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0817 00:44:49.302573       1 config.go:224] Starting endpoint slice config controller
	I0817 00:44:49.302629       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0817 00:44:49.410009       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	E0817 00:44:49.500512       1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"newest-cni-20210817003608-111344.169bf16107717738", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc03edf804fb5a1e4, ext:4123142401, loc:(*time.Location)(0x2d7f3c0)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-newest-cni-20210817003608-111344", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"newest-cni-20210817003608-111344", UID:"newest-cni-20210817003608-111344", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "newest-cni-20210817003608-111344.169bf16107717738" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
	I0817 00:44:49.582123       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [25fbe133425d] <==
	* E0817 00:44:04.688575       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0817 00:44:04.709320       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0817 00:44:04.713183       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0817 00:44:05.467160       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0817 00:44:05.599523       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0817 00:44:05.734810       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0817 00:44:05.749413       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0817 00:44:05.760878       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0817 00:44:05.770968       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0817 00:44:05.942726       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0817 00:44:05.976287       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0817 00:44:06.065026       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0817 00:44:06.082831       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0817 00:44:06.088192       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0817 00:44:06.194083       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0817 00:44:06.248087       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0817 00:44:06.250149       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0817 00:44:06.267141       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0817 00:44:07.400274       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0817 00:44:07.572424       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0817 00:44:08.738764       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0817 00:44:13.221460       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	I0817 00:45:05.310507       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0817 00:45:05.326066       1 secure_serving.go:301] Stopped listening on 127.0.0.1:10259
	I0817 00:45:05.326122       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	
	* 
	* ==> kube-scheduler [90a9fc57f190] <==
	* W0817 00:46:12.220996       1 feature_gate.go:237] Setting GA feature gate ServerSideApply=true. It will be removed in a future release.
	I0817 00:46:17.314325       1 serving.go:347] Generated self-signed cert in-memory
	W0817 00:46:30.776408       1 authentication.go:345] Error looking up in-cluster authentication configuration: Get "https://192.168.76.2:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W0817 00:46:30.776689       1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0817 00:46:30.776706       1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0817 00:46:32.938391       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0817 00:46:32.938565       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0817 00:46:32.945790       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0817 00:46:32.948113       1 secure_serving.go:195] Serving securely on 127.0.0.1:10259
	I0817 00:46:33.541038       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2021-08-17 00:45:32 UTC, end at Tue 2021-08-17 00:47:37 UTC. --
	Aug 17 00:47:30 newest-cni-20210817003608-111344 kubelet[848]: E0817 00:47:30.704339     848 kuberuntime_manager.go:815] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"fca59cca90cee28b1cf71a788d1beb60404eacc25a612f388bc66fba99b7768b\" network for pod \"coredns-78fcd69978-4rqlg\": networkPlugin cni failed to set up pod \"coredns-78fcd69978-4rqlg_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"fca59cca90cee28b1cf71a788d1beb60404eacc25a612f388bc66fba99b7768b\" network for pod \"coredns-78fcd69978-4rqlg\": networkPlugin cni failed to teardown pod \"coredns-78fcd69978-4rqlg_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.14 -j CNI-f9577bea733eda86d3723118 -m comment --comment name: \"crio\" id: \"fca59cca90cee28b1cf71a788d1beb60404eacc25a612f388bc66fba99b7768b\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-f9577bea733eda86d3723118':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kube-system/coredns-78fcd69978-4rqlg"
	Aug 17 00:47:30 newest-cni-20210817003608-111344 kubelet[848]: E0817 00:47:30.706301     848 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-78fcd69978-4rqlg_kube-system(e31d4e8c-dd23-45cf-9a37-aba902e87d97)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-78fcd69978-4rqlg_kube-system(e31d4e8c-dd23-45cf-9a37-aba902e87d97)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"fca59cca90cee28b1cf71a788d1beb60404eacc25a612f388bc66fba99b7768b\\\" network for pod \\\"coredns-78fcd69978-4rqlg\\\": networkPlugin cni failed to set up pod \\\"coredns-78fcd69978-4rqlg_kube-system\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"fca59cca90cee28b1cf71a788d1beb60404eacc25a612f388bc66fba99b7768b\\\" network for pod \\\"coredns-78fcd69978-4rqlg\\\": networkPlugin cni failed to teardown pod \\\"coredns-78fcd69978-4rqlg_kube-system\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.14 -j CNI-f9577bea733eda86d3723118 -m comment --comment name: \\\"crio\\\" id: \\\"fca59cca90cee28b1cf71a788d1beb60404eacc25a612f388bc66fba99b7768b\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-f9577bea733eda86d3723118':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kube-system/coredns-78fcd69978-4rqlg" podUID=e31d4e8c-dd23-45cf-9a37-aba902e87d97
	Aug 17 00:47:31 newest-cni-20210817003608-111344 kubelet[848]: E0817 00:47:31.463696     848 remote_runtime.go:116] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"269468db6d2462fac00115800e0e7796638393130943d376041ad1cbb4921ba5\" network for pod \"metrics-server-7c784ccb57-vkvfp\": networkPlugin cni failed to set up pod \"metrics-server-7c784ccb57-vkvfp_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"269468db6d2462fac00115800e0e7796638393130943d376041ad1cbb4921ba5\" network for pod \"metrics-server-7c784ccb57-vkvfp\": networkPlugin cni failed to teardown pod \"metrics-server-7c784ccb57-vkvfp_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.15 -j CNI-ed1728b82583a7d5152b72a0 -m comment --comment name: \"crio\" id: \"269468db6d2462fac00115800e0e7796638393130943d376041ad1cbb4921ba5\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-ed1728b82583a7d5152b72a0':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]"
	Aug 17 00:47:31 newest-cni-20210817003608-111344 kubelet[848]: E0817 00:47:31.464473     848 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"269468db6d2462fac00115800e0e7796638393130943d376041ad1cbb4921ba5\" network for pod \"metrics-server-7c784ccb57-vkvfp\": networkPlugin cni failed to set up pod \"metrics-server-7c784ccb57-vkvfp_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"269468db6d2462fac00115800e0e7796638393130943d376041ad1cbb4921ba5\" network for pod \"metrics-server-7c784ccb57-vkvfp\": networkPlugin cni failed to teardown pod \"metrics-server-7c784ccb57-vkvfp_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.15 -j CNI-ed1728b82583a7d5152b72a0 -m comment --comment name: \"crio\" id: \"269468db6d2462fac00115800e0e7796638393130943d376041ad1cbb4921ba5\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-ed1728b82583a7d5152b72a0':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kube-system/metrics-server-7c784ccb57-vkvfp"
	Aug 17 00:47:31 newest-cni-20210817003608-111344 kubelet[848]: E0817 00:47:31.467204     848 kuberuntime_manager.go:815] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"269468db6d2462fac00115800e0e7796638393130943d376041ad1cbb4921ba5\" network for pod \"metrics-server-7c784ccb57-vkvfp\": networkPlugin cni failed to set up pod \"metrics-server-7c784ccb57-vkvfp_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"269468db6d2462fac00115800e0e7796638393130943d376041ad1cbb4921ba5\" network for pod \"metrics-server-7c784ccb57-vkvfp\": networkPlugin cni failed to teardown pod \"metrics-server-7c784ccb57-vkvfp_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.15 -j CNI-ed1728b82583a7d5152b72a0 -m comment --comment name: \"crio\" id: \"269468db6d2462fac00115800e0e7796638393130943d376041ad1cbb4921ba5\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-ed1728b82583a7d5152b72a0':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kube-system/metrics-server-7c784ccb57-vkvfp"
	Aug 17 00:47:31 newest-cni-20210817003608-111344 kubelet[848]: E0817 00:47:31.467649     848 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"metrics-server-7c784ccb57-vkvfp_kube-system(9ec6eb01-a852-4f2e-a8bb-0d9888bcf668)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"metrics-server-7c784ccb57-vkvfp_kube-system(9ec6eb01-a852-4f2e-a8bb-0d9888bcf668)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"269468db6d2462fac00115800e0e7796638393130943d376041ad1cbb4921ba5\\\" network for pod \\\"metrics-server-7c784ccb57-vkvfp\\\": networkPlugin cni failed to set up pod \\\"metrics-server-7c784ccb57-vkvfp_kube-system\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"269468db6d2462fac00115800e0e7796638393130943d376041ad1cbb4921ba5\\\" network for pod \\\"metrics-server-7c784ccb57-vkvfp\\\": networkPlugin cni failed to teardown pod \\\"metrics-server-7c784ccb57-vkvfp_kube-system\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.15 -j CNI-ed1728b82583a7d5152b72a0 -m comment --comment name: \\\"crio\\\" id: \\\"269468db6d2462fac00115800e0e7796638393130943d376041ad1cbb4921ba5\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-ed1728b82583a7d5152b72a0':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kube-system/metrics-server-7c784ccb57-vkvfp" podUID=9ec6eb01-a852-4f2e-a8bb-0d9888bcf668
	Aug 17 00:47:32 newest-cni-20210817003608-111344 kubelet[848]: I0817 00:47:32.752559     848 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="b8ee2ec885de5552dbbaf92bce05904d3ec0d22b2aa07fd13ce0083bb5cc7699"
	Aug 17 00:47:32 newest-cni-20210817003608-111344 kubelet[848]: I0817 00:47:32.812594     848 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="networkPlugin cni failed on the status hook for pod \"metrics-server-7c784ccb57-vkvfp_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"269468db6d2462fac00115800e0e7796638393130943d376041ad1cbb4921ba5\""
	Aug 17 00:47:32 newest-cni-20210817003608-111344 kubelet[848]: I0817 00:47:32.938200     848 cni.go:333] "CNI failed to retrieve network namespace path" err="cannot find network namespace for the terminated container \"269468db6d2462fac00115800e0e7796638393130943d376041ad1cbb4921ba5\""
	Aug 17 00:47:33 newest-cni-20210817003608-111344 kubelet[848]: I0817 00:47:33.070659     848 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="4f9627e4c643ff84f76dc44391c24481fed8843365dbe644c8217b1591dfb81e"
	Aug 17 00:47:33 newest-cni-20210817003608-111344 kubelet[848]: I0817 00:47:33.070709     848 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="6b91176bd5311f7fa5d393c0aec0c63a5550425f9daeca25cb1f5d0ca47c7518"
	Aug 17 00:47:35 newest-cni-20210817003608-111344 kubelet[848]: E0817 00:47:35.565127     848 cni.go:361] "Error adding pod to network" err="failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d-smdrj" podSandboxID={Type:docker ID:b8ee2ec885de5552dbbaf92bce05904d3ec0d22b2aa07fd13ce0083bb5cc7699} podNetnsPath="/proc/4820/ns/net" networkType="bridge" networkName="crio"
	Aug 17 00:47:35 newest-cni-20210817003608-111344 kubelet[848]: E0817 00:47:35.596198     848 cni.go:361] "Error adding pod to network" err="failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-hf47r" podSandboxID={Type:docker ID:6b91176bd5311f7fa5d393c0aec0c63a5550425f9daeca25cb1f5d0ca47c7518} podNetnsPath="/proc/4828/ns/net" networkType="bridge" networkName="crio"
	Aug 17 00:47:36 newest-cni-20210817003608-111344 kubelet[848]: E0817 00:47:36.002289     848 cni.go:380] "Error deleting pod from network" err="running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.16 -j CNI-8bbe8ba02dbc96136c41485f -m comment --comment name: \"crio\" id: \"b8ee2ec885de5552dbbaf92bce05904d3ec0d22b2aa07fd13ce0083bb5cc7699\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-8bbe8ba02dbc96136c41485f':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n" pod="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d-smdrj" podSandboxID={Type:docker ID:b8ee2ec885de5552dbbaf92bce05904d3ec0d22b2aa07fd13ce0083bb5cc7699} podNetnsPath="/proc/4820/ns/net" networkType="bridge" networkName="crio"
	Aug 17 00:47:36 newest-cni-20210817003608-111344 kubelet[848]: E0817 00:47:36.218798     848 cni.go:380] "Error deleting pod from network" err="running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.17 -j CNI-4e6e59aef352a993f7e38b83 -m comment --comment name: \"crio\" id: \"6b91176bd5311f7fa5d393c0aec0c63a5550425f9daeca25cb1f5d0ca47c7518\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-4e6e59aef352a993f7e38b83':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-hf47r" podSandboxID={Type:docker ID:6b91176bd5311f7fa5d393c0aec0c63a5550425f9daeca25cb1f5d0ca47c7518} podNetnsPath="/proc/4828/ns/net" networkType="bridge" networkName="crio"
	Aug 17 00:47:36 newest-cni-20210817003608-111344 kubelet[848]: I0817 00:47:36.244170     848 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="269468db6d2462fac00115800e0e7796638393130943d376041ad1cbb4921ba5"
	Aug 17 00:47:36 newest-cni-20210817003608-111344 kubelet[848]: I0817 00:47:36.252353     848 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="8c425c9ba31850bbc590a3dec0534beab35c5de33efbf1badb9ce954e3278178"
	Aug 17 00:47:36 newest-cni-20210817003608-111344 kubelet[848]: I0817 00:47:36.279844     848 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="networkPlugin cni failed on the status hook for pod \"coredns-78fcd69978-4rqlg_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"fca59cca90cee28b1cf71a788d1beb60404eacc25a612f388bc66fba99b7768b\""
	Aug 17 00:47:36 newest-cni-20210817003608-111344 kubelet[848]: I0817 00:47:36.795292     848 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="fca59cca90cee28b1cf71a788d1beb60404eacc25a612f388bc66fba99b7768b"
	Aug 17 00:47:36 newest-cni-20210817003608-111344 kubelet[848]: I0817 00:47:36.854548     848 cni.go:333] "CNI failed to retrieve network namespace path" err="cannot find network namespace for the terminated container \"fca59cca90cee28b1cf71a788d1beb60404eacc25a612f388bc66fba99b7768b\""
	Aug 17 00:47:37 newest-cni-20210817003608-111344 kubelet[848]: E0817 00:47:37.210130     848 remote_runtime.go:116] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"b8ee2ec885de5552dbbaf92bce05904d3ec0d22b2aa07fd13ce0083bb5cc7699\" network for pod \"kubernetes-dashboard-6fcdf4f6d-smdrj\": networkPlugin cni failed to set up pod \"kubernetes-dashboard-6fcdf4f6d-smdrj_kubernetes-dashboard\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"b8ee2ec885de5552dbbaf92bce05904d3ec0d22b2aa07fd13ce0083bb5cc7699\" network for pod \"kubernetes-dashboard-6fcdf4f6d-smdrj\": networkPlugin cni failed to teardown pod \"kubernetes-dashboard-6fcdf4f6d-smdrj_kubernetes-dashboard\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.16 -j CNI-8bbe8ba02dbc96136c41485f -m comment --comment name: \"crio\" id: \"b8ee2ec885de5552dbbaf92bce05904d3ec0d22b2aa07fd13ce0083bb5cc7699\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-8bbe8ba02dbc96136c41485f':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]"
	Aug 17 00:47:37 newest-cni-20210817003608-111344 kubelet[848]: E0817 00:47:37.215460     848 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"b8ee2ec885de5552dbbaf92bce05904d3ec0d22b2aa07fd13ce0083bb5cc7699\" network for pod \"kubernetes-dashboard-6fcdf4f6d-smdrj\": networkPlugin cni failed to set up pod \"kubernetes-dashboard-6fcdf4f6d-smdrj_kubernetes-dashboard\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"b8ee2ec885de5552dbbaf92bce05904d3ec0d22b2aa07fd13ce0083bb5cc7699\" network for pod \"kubernetes-dashboard-6fcdf4f6d-smdrj\": networkPlugin cni failed to teardown pod \"kubernetes-dashboard-6fcdf4f6d-smdrj_kubernetes-dashboard\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.16 -j CNI-8bbe8ba02dbc96136c41485f -m comment --comment name: \"crio\" id: \"b8ee2ec885de5552dbbaf92bce05904d3ec0d22b2aa07fd13ce0083bb5cc7699\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-8bbe8ba02dbc96136c41485f':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d-smdrj"
	Aug 17 00:47:37 newest-cni-20210817003608-111344 kubelet[848]: E0817 00:47:37.215880     848 kuberuntime_manager.go:815] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"b8ee2ec885de5552dbbaf92bce05904d3ec0d22b2aa07fd13ce0083bb5cc7699\" network for pod \"kubernetes-dashboard-6fcdf4f6d-smdrj\": networkPlugin cni failed to set up pod \"kubernetes-dashboard-6fcdf4f6d-smdrj_kubernetes-dashboard\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"b8ee2ec885de5552dbbaf92bce05904d3ec0d22b2aa07fd13ce0083bb5cc7699\" network for pod \"kubernetes-dashboard-6fcdf4f6d-smdrj\": networkPlugin cni failed to teardown pod \"kubernetes-dashboard-6fcdf4f6d-smdrj_kubernetes-dashboard\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.16 -j CNI-8bbe8ba02dbc96136c41485f -m comment --comment name: \"crio\" id: \"b8ee2ec885de5552dbbaf92bce05904d3ec0d22b2aa07fd13ce0083bb5cc7699\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-8bbe8ba02dbc96136c41485f':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d-smdrj"
	Aug 17 00:47:37 newest-cni-20210817003608-111344 kubelet[848]: E0817 00:47:37.227349     848 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kubernetes-dashboard-6fcdf4f6d-smdrj_kubernetes-dashboard(4e529929-07ad-471c-9d24-fa48b90a186a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kubernetes-dashboard-6fcdf4f6d-smdrj_kubernetes-dashboard(4e529929-07ad-471c-9d24-fa48b90a186a)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"b8ee2ec885de5552dbbaf92bce05904d3ec0d22b2aa07fd13ce0083bb5cc7699\\\" network for pod \\\"kubernetes-dashboard-6fcdf4f6d-smdrj\\\": networkPlugin cni failed to set up pod \\\"kubernetes-dashboard-6fcdf4f6d-smdrj_kubernetes-dashboard\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"b8ee2ec885de5552dbbaf92bce05904d3ec0d22b2aa07fd13ce0083bb5cc7699\\\" network for pod \\\"kubernetes-dashboard-6fcdf4f6d-smdrj\\\": networkPlugin cni failed to teardown pod \\\"kubernetes-dashboard-6fcdf4f6d-smdrj_kubernetes-dashboard\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.16 -j CNI-8bbe8ba02dbc96136c41485f -m comment --comment name: \\\"crio\\\" id: \\\"b8ee2ec885de5552dbbaf92bce05904d3ec0d22b2aa07fd13ce0083bb5cc7699\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-8bbe8ba02dbc96136c41485f':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d-smdrj" podUID=4e529929-07ad-471c-9d24-fa48b90a186a
	Aug 17 00:47:37 newest-cni-20210817003608-111344 kubelet[848]: I0817 00:47:37.252639     848 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="networkPlugin cni failed on the status hook for pod \"kubernetes-dashboard-6fcdf4f6d-smdrj_kubernetes-dashboard\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"b8ee2ec885de5552dbbaf92bce05904d3ec0d22b2aa07fd13ce0083bb5cc7699\""
	
	* 
	* ==> storage-provisioner [20f37dcaff3d] <==
	* I0817 00:45:01.446404       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0817 00:45:01.601472       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0817 00:45:01.601642       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0817 00:45:01.794648       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0817 00:45:01.808529       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_newest-cni-20210817003608-111344_7a27e747-868d-4099-bfe0-8b47b32c823a!
	I0817 00:45:01.864351       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"168f6965-6e18-4d4f-9617-5b75a0803d8e", APIVersion:"v1", ResourceVersion:"522", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' newest-cni-20210817003608-111344_7a27e747-868d-4099-bfe0-8b47b32c823a became leader
	I0817 00:45:02.219933       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_newest-cni-20210817003608-111344_7a27e747-868d-4099-bfe0-8b47b32c823a!
	
	* 
	* ==> storage-provisioner [2333add2d120] <==
	* I0817 00:47:06.312269       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0817 00:47:36.327797       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-20210817003608-111344 -n newest-cni-20210817003608-111344
E0817 00:47:40.625432  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\auto-20210817002157-111344\client.crt: The system cannot find the path specified.

=== CONT  TestStartStop/group/newest-cni/serial/Pause
helpers_test.go:255: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-20210817003608-111344 -n newest-cni-20210817003608-111344: (5.0261894s)
helpers_test.go:262: (dbg) Run:  kubectl --context newest-cni-20210817003608-111344 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: coredns-78fcd69978-4rqlg metrics-server-7c784ccb57-vkvfp dashboard-metrics-scraper-8685c45546-hf47r kubernetes-dashboard-6fcdf4f6d-smdrj
helpers_test.go:273: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context newest-cni-20210817003608-111344 describe pod coredns-78fcd69978-4rqlg metrics-server-7c784ccb57-vkvfp dashboard-metrics-scraper-8685c45546-hf47r kubernetes-dashboard-6fcdf4f6d-smdrj
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context newest-cni-20210817003608-111344 describe pod coredns-78fcd69978-4rqlg metrics-server-7c784ccb57-vkvfp dashboard-metrics-scraper-8685c45546-hf47r kubernetes-dashboard-6fcdf4f6d-smdrj: exit status 1 (325.528ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-78fcd69978-4rqlg" not found
	Error from server (NotFound): pods "metrics-server-7c784ccb57-vkvfp" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-8685c45546-hf47r" not found
	Error from server (NotFound): pods "kubernetes-dashboard-6fcdf4f6d-smdrj" not found

** /stderr **
helpers_test.go:278: kubectl --context newest-cni-20210817003608-111344 describe pod coredns-78fcd69978-4rqlg metrics-server-7c784ccb57-vkvfp dashboard-metrics-scraper-8685c45546-hf47r kubernetes-dashboard-6fcdf4f6d-smdrj: exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect newest-cni-20210817003608-111344

=== CONT  TestStartStop/group/newest-cni/serial/Pause
helpers_test.go:236: (dbg) docker inspect newest-cni-20210817003608-111344:

-- stdout --
	[
	    {
	        "Id": "143a9dd09ce88e12ec2a22bbe8cc0ef3ae7ca0b95bd6a2b6697406686aa3bcbb",
	        "Created": "2021-08-17T00:40:30.816513Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 274995,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-17T00:45:30.58461Z",
	            "FinishedAt": "2021-08-17T00:45:17.3236145Z"
	        },
	        "Image": "sha256:965ebc74cb435c5d73c204fcd638e9daa526ad4ce3ec7f76be1149f30a789fab",
	        "ResolvConfPath": "/var/lib/docker/containers/143a9dd09ce88e12ec2a22bbe8cc0ef3ae7ca0b95bd6a2b6697406686aa3bcbb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/143a9dd09ce88e12ec2a22bbe8cc0ef3ae7ca0b95bd6a2b6697406686aa3bcbb/hostname",
	        "HostsPath": "/var/lib/docker/containers/143a9dd09ce88e12ec2a22bbe8cc0ef3ae7ca0b95bd6a2b6697406686aa3bcbb/hosts",
	        "LogPath": "/var/lib/docker/containers/143a9dd09ce88e12ec2a22bbe8cc0ef3ae7ca0b95bd6a2b6697406686aa3bcbb/143a9dd09ce88e12ec2a22bbe8cc0ef3ae7ca0b95bd6a2b6697406686aa3bcbb-json.log",
	        "Name": "/newest-cni-20210817003608-111344",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "newest-cni-20210817003608-111344:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-20210817003608-111344",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/cd01e88b5bfaeb04bd830f98aea8fbd63038ea29218406dc9dfb87a019607b9e-init/diff:/var/lib/docker/overlay2/e167e57d4b442602b2435f5ffd2147b1da53de34df49d96ce69565867fcf3850/diff:/var/lib/docker/overlay2/dbfef15a73962254d5bcc2c91a409021fc3573c3135096621d707c6f4feaac7d/diff:/var/lib/docker/overlay2/7fc44848dc580276135d9db2b62ce047cfba1909de5e91acbe8c1a5fc8fb3649/diff:/var/lib/docker/overlay2/493996ff2d6a75ef70db2749dded6936397fe536c32e28dda979b8af93e19f13/diff:/var/lib/docker/overlay2/b862553905dec6f42a41351a012fdce386251d97160f74f6b1feb3b455e1f53a/diff:/var/lib/docker/overlay2/517a8b2830d9e81ff950c8305063a6681219abbb7b22f3a87587fa819a0728ed/diff:/var/lib/docker/overlay2/f2b268080cfd9bbb64731ea6b7cb2ec64077e6c2701c2ab6e8b358a541056c5d/diff:/var/lib/docker/overlay2/ee5e612696333c681900cad605a1f678e9114e9c7ecf70717fad21aea1e52992/diff:/var/lib/docker/overlay2/6f44289af0b09a02645c237aabeff61487c57040b9531c0f7bd97517308bfd57/diff:/var/lib/docker/overlay2/f98f67
21a411bacf9d310d4d4405fbd528fa90d60af5ffabda9d55cef9ef3033/diff:/var/lib/docker/overlay2/8bc2e0f6b7c2aeccc6a944f316dbac5672f8685cc5dd5d3c2fc4bd370db4949f/diff:/var/lib/docker/overlay2/ef9e793c1e243004ff088f210369994837eb19a8abd21cf93f75257155445f16/diff:/var/lib/docker/overlay2/48fa7f37fc37f8220a31f4294bc800ef7a33c53c10bdc23d7dc68f27cfe4e535/diff:/var/lib/docker/overlay2/54bc5e0e0c32fdc66ce3eeb345721201a63a0c878d4665607246cd4aa5af61e5/diff:/var/lib/docker/overlay2/398c3fc63254fcc564086ced0eb7211f2d474f8bbdcd43ee27fd609e767c44a6/diff:/var/lib/docker/overlay2/796acb5b93384da004a8065a332cbb07c952569bdd7bb5e551b218e4c5c61f73/diff:/var/lib/docker/overlay2/d90baef87ad95bdfb14a2f35e4cb62336e18c21eb934266f43bfbe017252b857/diff:/var/lib/docker/overlay2/c16752decc8ef06fce4eebdf4ff4725414f3aa80cccd7b3ffdc325095930c0b4/diff:/var/lib/docker/overlay2/a679084eec181b0e1408e573d1ac08c47af1fd8266eb5884bf1a38d5ba0ddbbc/diff:/var/lib/docker/overlay2/15becb79b0d40211562ae33ddc5ec776276b9ae42c8a9f4645dcc6442b36f771/diff:/var/lib/d
ocker/overlay2/068a9a5dce1094eb72788237bd9cda4c76345774d5e647f0af81302a75861f4a/diff:/var/lib/docker/overlay2/74b9e9d807e09734ee96c76bc67adc56c9e3286b39315f89f6747c8c917ad2e5/diff:/var/lib/docker/overlay2/75de8e4895a0b4efe563705c06184db384b5c40154856b9bca2106a8d59fc151/diff:/var/lib/docker/overlay2/cbca3c40b21fee2ef276744168492f17203934aca8de4b459edae2fa55b6bb02/diff:/var/lib/docker/overlay2/584d28a6308bb998bd89d7d92c45b57b9dd66de472d166972d2f5195afd9dd44/diff:/var/lib/docker/overlay2/9c722118749c036eb2d00ba5a6925c5f32b121d64974c99e2de552b26a8bb7cd/diff:/var/lib/docker/overlay2/24908c792743f57c182587c66263f074ed86ae7c5812c631dea82d8ec6650e81/diff:/var/lib/docker/overlay2/9a8de59bfb816b3fc2f0fd522ef966196534483b5e87aafd180dd8b07e9c3582/diff:/var/lib/docker/overlay2/df46d170084213da519dea7e0f402d51272dc10df4d7cd7f37c528c411ac7000/diff:/var/lib/docker/overlay2/36b86a6f515e5882426e598755bb77d43cc340fd20798dfd0a810cd2ab96eeb6/diff:/var/lib/docker/overlay2/b54ac02f70047359cd143a32f862d18498cb556877ccfd252defb9d17fc
9d9f5/diff:/var/lib/docker/overlay2/971b77d080920997e1d0d0936f312a9a322ccd6ab9920c83a8eb5d14b93c3849/diff:/var/lib/docker/overlay2/5b5c21ae360c7e0738c0048bc3fe8d7d3cc0640d266660121f3968f675f42063/diff:/var/lib/docker/overlay2/e07bf2561a99ba47435b8f84b267268e02e9e4ff47832bd5054ee28bb1ca5001/diff:/var/lib/docker/overlay2/0c560be48f01814af21ec54fc79ea5e8db28f05e967a17b331be28ad61c75483/diff:/var/lib/docker/overlay2/27930667f3fd0fd38c13a39c0590c03a2c3b3ba04f0a3c946167be6a40f50c46/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cd01e88b5bfaeb04bd830f98aea8fbd63038ea29218406dc9dfb87a019607b9e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cd01e88b5bfaeb04bd830f98aea8fbd63038ea29218406dc9dfb87a019607b9e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cd01e88b5bfaeb04bd830f98aea8fbd63038ea29218406dc9dfb87a019607b9e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-20210817003608-111344",
	                "Source": "/var/lib/docker/volumes/newest-cni-20210817003608-111344/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-20210817003608-111344",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-20210817003608-111344",
	                "name.minikube.sigs.k8s.io": "newest-cni-20210817003608-111344",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cf4be4471fd151b3d8822ccf0bfaf4290188218d64d0cb47b6ceb41287f90877",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55238"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55237"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55234"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55236"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55235"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/cf4be4471fd1",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-20210817003608-111344": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "143a9dd09ce8",
	                        "newest-cni-20210817003608-111344"
	                    ],
	                    "NetworkID": "2e2979479cd3df0ec63a7a5d29fed62692100e75759df90c874233a471610ab5",
	                    "EndpointID": "f3f0d26563739858c0c00e6958045152e637f1dc1d741ffa8f1a6d707b88e1f9",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20210817003608-111344 -n newest-cni-20210817003608-111344
helpers_test.go:240: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20210817003608-111344 -n newest-cni-20210817003608-111344: (5.6176373s)
helpers_test.go:245: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p newest-cni-20210817003608-111344 logs -n 25
E0817 00:48:02.508790  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\false-20210817002204-111344\client.crt: The system cannot find the path specified.
helpers_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p newest-cni-20210817003608-111344 logs -n 25: (13.8867815s)
helpers_test.go:253: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|--------------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                     Profile                      |          User           | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|--------------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	| delete  | -p                                                         | embed-certs-20210817002328-111344                | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:36:28 GMT | Tue, 17 Aug 2021 00:36:33 GMT |
	|         | embed-certs-20210817002328-111344                          |                                                  |                         |         |                               |                               |
	| unpause | -p                                                         | no-preload-20210817002237-111344                 | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:36:32 GMT | Tue, 17 Aug 2021 00:36:36 GMT |
	|         | no-preload-20210817002237-111344                           |                                                  |                         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                  |                         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20210817002237-111344                 | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:36:46 GMT | Tue, 17 Aug 2021 00:37:03 GMT |
	|         | no-preload-20210817002237-111344                           |                                                  |                         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20210817002237-111344                 | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:37:04 GMT | Tue, 17 Aug 2021 00:37:09 GMT |
	|         | no-preload-20210817002237-111344                           |                                                  |                         |         |                               |                               |
	| start   | -p                                                         | default-k8s-different-port-20210817002733-111344 | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:31:07 GMT | Tue, 17 Aug 2021 00:38:22 GMT |
	|         | default-k8s-different-port-20210817002733-111344           |                                                  |                         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                  |                         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=docker                      |                                                  |                         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                  |                         |         |                               |                               |
	| ssh     | -p                                                         | default-k8s-different-port-20210817002733-111344 | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:38:44 GMT | Tue, 17 Aug 2021 00:38:48 GMT |
	|         | default-k8s-different-port-20210817002733-111344           |                                                  |                         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                  |                         |         |                               |                               |
	| pause   | -p                                                         | default-k8s-different-port-20210817002733-111344 | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:38:49 GMT | Tue, 17 Aug 2021 00:38:53 GMT |
	|         | default-k8s-different-port-20210817002733-111344           |                                                  |                         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                  |                         |         |                               |                               |
	| unpause | -p                                                         | default-k8s-different-port-20210817002733-111344 | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:39:02 GMT | Tue, 17 Aug 2021 00:39:07 GMT |
	|         | default-k8s-different-port-20210817002733-111344           |                                                  |                         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                  |                         |         |                               |                               |
	| delete  | -p                                                         | default-k8s-different-port-20210817002733-111344 | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:39:16 GMT | Tue, 17 Aug 2021 00:39:34 GMT |
	|         | default-k8s-different-port-20210817002733-111344           |                                                  |                         |         |                               |                               |
	| delete  | -p                                                         | default-k8s-different-port-20210817002733-111344 | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:39:35 GMT | Tue, 17 Aug 2021 00:39:40 GMT |
	|         | default-k8s-different-port-20210817002733-111344           |                                                  |                         |         |                               |                               |
	| start   | -p auto-20210817002157-111344                              | auto-20210817002157-111344                       | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:36:34 GMT | Tue, 17 Aug 2021 00:39:52 GMT |
	|         | --memory=2048                                              |                                                  |                         |         |                               |                               |
	|         | --alsologtostderr                                          |                                                  |                         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m                              |                                                  |                         |         |                               |                               |
	|         | --driver=docker                                            |                                                  |                         |         |                               |                               |
	| ssh     | -p auto-20210817002157-111344                              | auto-20210817002157-111344                       | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:39:52 GMT | Tue, 17 Aug 2021 00:39:55 GMT |
	|         | pgrep -a kubelet                                           |                                                  |                         |         |                               |                               |
	| start   | -p false-20210817002204-111344                             | false-20210817002204-111344                      | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:37:10 GMT | Tue, 17 Aug 2021 00:40:12 GMT |
	|         | --memory=2048                                              |                                                  |                         |         |                               |                               |
	|         | --alsologtostderr --wait=true                              |                                                  |                         |         |                               |                               |
	|         | --wait-timeout=5m --cni=false                              |                                                  |                         |         |                               |                               |
	|         | --driver=docker                                            |                                                  |                         |         |                               |                               |
	| ssh     | -p false-20210817002204-111344                             | false-20210817002204-111344                      | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:40:13 GMT | Tue, 17 Aug 2021 00:40:17 GMT |
	|         | pgrep -a kubelet                                           |                                                  |                         |         |                               |                               |
	| delete  | -p auto-20210817002157-111344                              | auto-20210817002157-111344                       | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:40:19 GMT | Tue, 17 Aug 2021 00:40:41 GMT |
	| delete  | -p false-20210817002204-111344                             | false-20210817002204-111344                      | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:40:49 GMT | Tue, 17 Aug 2021 00:41:08 GMT |
	| start   | -p newest-cni-20210817003608-111344 --memory=2200          | newest-cni-20210817003608-111344                 | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:36:08 GMT | Tue, 17 Aug 2021 00:44:48 GMT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                  |                         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                  |                         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                  |                         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                  |                         |         |                               |                               |
	|         | --driver=docker --kubernetes-version=v1.22.0-rc.0          |                                                  |                         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20210817003608-111344                 | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:44:49 GMT | Tue, 17 Aug 2021 00:44:59 GMT |
	|         | newest-cni-20210817003608-111344                           |                                                  |                         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                  |                         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                  |                         |         |                               |                               |
	| stop    | -p                                                         | newest-cni-20210817003608-111344                 | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:44:59 GMT | Tue, 17 Aug 2021 00:45:18 GMT |
	|         | newest-cni-20210817003608-111344                           |                                                  |                         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                  |                         |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20210817003608-111344                 | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:45:21 GMT | Tue, 17 Aug 2021 00:45:23 GMT |
	|         | newest-cni-20210817003608-111344                           |                                                  |                         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                  |                         |         |                               |                               |
	| start   | -p                                                         | cilium-20210817002204-111344                     | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:39:41 GMT | Tue, 17 Aug 2021 00:46:54 GMT |
	|         | cilium-20210817002204-111344                               |                                                  |                         |         |                               |                               |
	|         | --memory=2048                                              |                                                  |                         |         |                               |                               |
	|         | --alsologtostderr --wait=true                              |                                                  |                         |         |                               |                               |
	|         | --wait-timeout=5m --cni=cilium                             |                                                  |                         |         |                               |                               |
	|         | --driver=docker                                            |                                                  |                         |         |                               |                               |
	| start   | -p newest-cni-20210817003608-111344 --memory=2200          | newest-cni-20210817003608-111344                 | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:45:23 GMT | Tue, 17 Aug 2021 00:46:58 GMT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                  |                         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                  |                         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                  |                         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                  |                         |         |                               |                               |
	|         | --driver=docker --kubernetes-version=v1.22.0-rc.0          |                                                  |                         |         |                               |                               |
	| ssh     | -p                                                         | cilium-20210817002204-111344                     | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:47:00 GMT | Tue, 17 Aug 2021 00:47:04 GMT |
	|         | cilium-20210817002204-111344                               |                                                  |                         |         |                               |                               |
	|         | pgrep -a kubelet                                           |                                                  |                         |         |                               |                               |
	| ssh     | -p                                                         | newest-cni-20210817003608-111344                 | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:47:05 GMT | Tue, 17 Aug 2021 00:47:09 GMT |
	|         | newest-cni-20210817003608-111344                           |                                                  |                         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                  |                         |         |                               |                               |
	| -p      | newest-cni-20210817003608-111344                           | newest-cni-20210817003608-111344                 | WINDOWS-SERVER-\jenkins | v1.22.0 | Tue, 17 Aug 2021 00:47:22 GMT | Tue, 17 Aug 2021 00:47:38 GMT |
	|         | logs -n 25                                                 |                                                  |                         |         |                               |                               |
	|---------|------------------------------------------------------------|--------------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/17 00:45:23
	Running on machine: windows-server-2
	Binary: Built with gc go1.16.7 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0817 00:45:23.620145   32084 out.go:298] Setting OutFile to fd 4016 ...
	I0817 00:45:23.622181   32084 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 00:45:23.622307   32084 out.go:311] Setting ErrFile to fd 784...
	I0817 00:45:23.622307   32084 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 00:45:23.652702   32084 out.go:305] Setting JSON to false
	I0817 00:45:23.656729   32084 start.go:111] hostinfo: {"hostname":"windows-server-2","uptime":8369170,"bootTime":1620791953,"procs":146,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2f8328f4-5428-47c7-ab5a-b32e2504bd6f"}
	W0817 00:45:23.656920   32084 start.go:119] gopshost.Virtualization returned error: not implemented yet
	I0817 00:45:23.659581   32084 out.go:177] * [newest-cni-20210817003608-111344] minikube v1.22.0 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	I0817 00:45:23.660164   32084 notify.go:169] Checking for updates...
	I0817 00:45:23.661916   32084 out.go:177]   - KUBECONFIG=C:\Users\jenkins\minikube-integration\kubeconfig
	I0817 00:45:23.663473   32084 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins\minikube-integration\.minikube
	I0817 00:45:20.847211   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:23.348832   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:22.956548   73600 pod_ready.go:102] pod "cilium-zt4nw" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:24.995189   73600 pod_ready.go:102] pod "cilium-zt4nw" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:21.498475   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:24.005799   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:23.665152   32084 out.go:177]   - MINIKUBE_LOCATION=12230
	I0817 00:45:23.666019   32084 config.go:177] Loaded profile config "newest-cni-20210817003608-111344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.0-rc.0
	I0817 00:45:23.672867   32084 driver.go:335] Setting default libvirt URI to qemu:///system
	I0817 00:45:25.468346   32084 docker.go:132] docker version: linux-20.10.2
	I0817 00:45:25.474301   32084 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 00:45:26.395019   32084 info.go:263] docker info: {ID:4XCY:3GZD:KK67:IPM7:RRQF:WWZF:OGQ6:X6HQ:572M:7N57:P63G:EAE5 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:71 OomKillDisable:true NGoroutines:61 SystemTime:2021-08-17 00:45:25.964029 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0817 00:45:26.397542   32084 out.go:177] * Using the docker driver based on existing profile
	I0817 00:45:26.397742   32084 start.go:278] selected driver: docker
	I0817 00:45:26.397742   32084 start.go:751] validating driver "docker" against &{Name:newest-cni-20210817003608-111344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210817003608-111344 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 00:45:26.397997   32084 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0817 00:45:26.507000   32084 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 00:45:27.309399   32084 info.go:263] docker info: {ID:4XCY:3GZD:KK67:IPM7:RRQF:WWZF:OGQ6:X6HQ:572M:7N57:P63G:EAE5 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:71 OomKillDisable:true NGoroutines:61 SystemTime:2021-08-17 00:45:26.9340576 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0817 00:45:27.310113   32084 start_flags.go:716] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0817 00:45:27.310311   32084 cni.go:93] Creating CNI manager for ""
	I0817 00:45:27.310311   32084 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0817 00:45:27.310311   32084 start_flags.go:277] config:
	{Name:newest-cni-20210817003608-111344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210817003608-111344 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 00:45:27.314944   32084 out.go:177] * Starting control plane node newest-cni-20210817003608-111344 in cluster newest-cni-20210817003608-111344
	I0817 00:45:27.315223   32084 cache.go:117] Beginning downloading kic base image for docker with docker
	I0817 00:45:27.317051   32084 out.go:177] * Pulling base image ...
	I0817 00:45:27.317260   32084 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime docker
	I0817 00:45:27.317260   32084 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0817 00:45:27.317729   32084 preload.go:147] Found local preload: C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v11-v1.22.0-rc.0-docker-overlay2-amd64.tar.lz4
	I0817 00:45:27.317729   32084 cache.go:56] Caching tarball of preloaded images
	I0817 00:45:27.318560   32084 preload.go:173] Found C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v11-v1.22.0-rc.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0817 00:45:27.318872   32084 cache.go:59] Finished verifying existence of preloaded tar for  v1.22.0-rc.0 on docker
	I0817 00:45:27.319099   32084 profile.go:148] Saving config to C:\Users\jenkins\minikube-integration\.minikube\profiles\newest-cni-20210817003608-111344\config.json ...
	I0817 00:45:27.842326   32084 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0817 00:45:27.842326   32084 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0817 00:45:27.842326   32084 cache.go:205] Successfully downloaded all kic artifacts
	I0817 00:45:27.843068   32084 start.go:313] acquiring machines lock for newest-cni-20210817003608-111344: {Name:mk3f16f02a99d1b37ee77f4ca210722696dca362 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 00:45:27.843390   32084 start.go:317] acquired machines lock for "newest-cni-20210817003608-111344" in 321.7µs
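
start.go:313/317 above request a named machines lock with a 500ms retry delay and a 10m timeout, acquired here in ~322µs because nothing else holds it. A rough Go sketch of that acquire-with-retry pattern using an exclusive lock file; minikube's real lock is a named mutex, so this is illustrative only, and the lock-file name is a placeholder:

    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"path/filepath"
    	"time"
    )

    // acquireLock polls for an exclusive lock file on the Delay/Timeout
    // schedule shown in the start.go:313 spec above.
    func acquireLock(path string, delay, timeout time.Duration) (release func(), err error) {
    	deadline := time.Now().Add(timeout)
    	for {
    		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
    		if err == nil {
    			f.Close()
    			return func() { os.Remove(path) }, nil
    		}
    		if time.Now().After(deadline) {
    			return nil, fmt.Errorf("timed out acquiring %s: %w", path, err)
    		}
    		time.Sleep(delay)
    	}
    }

    func main() {
    	lock := filepath.Join(os.TempDir(), "machines-demo.lock") // placeholder name
    	release, err := acquireLock(lock, 500*time.Millisecond, 10*time.Minute)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer release()
    	fmt.Println("machines lock held")
    }
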
	I0817 00:45:27.843677   32084 start.go:93] Skipping create...Using existing machine configuration
	I0817 00:45:27.843677   32084 fix.go:55] fixHost starting: 
	I0817 00:45:27.865162   32084 cli_runner.go:115] Run: docker container inspect newest-cni-20210817003608-111344 --format={{.State.Status}}
	I0817 00:45:28.335499   32084 fix.go:108] recreateIfNeeded on newest-cni-20210817003608-111344: state=Stopped err=<nil>
	W0817 00:45:28.335499   32084 fix.go:134] unexpected machine state, will restart: <nil>
	I0817 00:45:28.341327   32084 out.go:177] * Restarting existing docker container for "newest-cni-20210817003608-111344" ...
	I0817 00:45:28.345077   32084 cli_runner.go:115] Run: docker start newest-cni-20210817003608-111344
	I0817 00:45:25.353473   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:27.931320   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:27.457012   73600 pod_ready.go:102] pod "cilium-zt4nw" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:29.608062   73600 pod_ready.go:102] pod "cilium-zt4nw" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:26.497938   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:28.525322   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:31.001669   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:30.677885   32084 cli_runner.go:168] Completed: docker start newest-cni-20210817003608-111344: (2.3325824s)
	I0817 00:45:30.687699   32084 cli_runner.go:115] Run: docker container inspect newest-cni-20210817003608-111344 --format={{.State.Status}}
	I0817 00:45:31.195879   32084 kic.go:420] container "newest-cni-20210817003608-111344" state is running.
	I0817 00:45:31.197407   32084 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20210817003608-111344
	I0817 00:45:31.724287   32084 profile.go:148] Saving config to C:\Users\jenkins\minikube-integration\.minikube\profiles\newest-cni-20210817003608-111344\config.json ...
	I0817 00:45:31.727922   32084 machine.go:88] provisioning docker machine ...
	I0817 00:45:31.728121   32084 ubuntu.go:169] provisioning hostname "newest-cni-20210817003608-111344"
	I0817 00:45:31.736535   32084 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210817003608-111344
	I0817 00:45:32.270727   32084 main.go:130] libmachine: Using SSH client type: native
	I0817 00:45:32.271286   32084 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x4c95a0] 0x4c9560 <nil>  [] 0s} 127.0.0.1 55238 <nil> <nil>}
	I0817 00:45:32.271286   32084 main.go:130] libmachine: About to run SSH command:
	sudo hostname newest-cni-20210817003608-111344 && echo "newest-cni-20210817003608-111344" | sudo tee /etc/hostname
	I0817 00:45:32.277349   32084 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: EOF
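
The `ssh: handshake failed: EOF` here is transient: the container was restarted only a couple of seconds earlier, so sshd behind forwarded port 55238 is not accepting connections yet, and libmachine keeps redialing until the hostname command succeeds at 00:45:35. A sketch of that dial-with-retry loop using golang.org/x/crypto/ssh; the key path is a placeholder, while the user and port mirror the log:

    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    // dialWithRetry redials until sshd in the freshly started container
    // answers; early attempts fail with EOF exactly as logged above.
    func dialWithRetry(addr string, cfg *ssh.ClientConfig, wait time.Duration) (*ssh.Client, error) {
    	var lastErr error
    	stop := time.Now().Add(wait)
    	for time.Now().Before(stop) {
    		c, err := ssh.Dial("tcp", addr, cfg)
    		if err == nil {
    			return c, nil
    		}
    		lastErr = err
    		time.Sleep(time.Second)
    	}
    	return nil, lastErr
    }

    func main() {
    	keyBytes, err := os.ReadFile("id_rsa") // placeholder key path
    	if err != nil {
    		log.Fatal(err)
    	}
    	signer, err := ssh.ParsePrivateKey(keyBytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test machine
    	}
    	client, err := dialWithRetry("127.0.0.1:55238", cfg, time.Minute)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer sess.Close()
    	out, err := sess.CombinedOutput("hostname")
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Printf("%s", out)
    }
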
	I0817 00:45:30.458438   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:32.859690   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:32.013546   73600 pod_ready.go:102] pod "cilium-zt4nw" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:34.474976   73600 pod_ready.go:102] pod "cilium-zt4nw" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:33.011273   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:35.273595   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:35.746881   32084 main.go:130] libmachine: SSH cmd err, output: <nil>: newest-cni-20210817003608-111344
	
	I0817 00:45:35.750222   32084 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210817003608-111344
	I0817 00:45:36.259781   32084 main.go:130] libmachine: Using SSH client type: native
	I0817 00:45:36.260471   32084 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x4c95a0] 0x4c9560 <nil>  [] 0s} 127.0.0.1 55238 <nil> <nil>}
	I0817 00:45:36.260471   32084 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20210817003608-111344' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20210817003608-111344/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20210817003608-111344' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 00:45:36.576223   32084 main.go:130] libmachine: SSH cmd err, output: <nil>: 
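
The hostname script above is deliberately idempotent: it first checks whether any /etc/hosts line already names the machine, and only then rewrites the 127.0.1.1 entry (or appends one), so reprovisioning an existing machine leaves the file untouched. Roughly the same ensure-entry logic in Go, with the file path and hostname as placeholder parameters (the matching here is a looser approximation of the grep patterns):

    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"strings"
    )

    // ensureHostsEntry mimics the shell above: if no line mentions name,
    // rewrite an existing "127.0.1.1 ..." line or append a new one.
    func ensureHostsEntry(path, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	lines := strings.Split(string(data), "\n")
    	for _, l := range lines {
    		if strings.Contains(l, name) {
    			return nil // already present, nothing to do
    		}
    	}
    	replaced := false
    	for i, l := range lines {
    		if strings.HasPrefix(l, "127.0.1.1") {
    			lines[i] = "127.0.1.1 " + name
    			replaced = true
    			break
    		}
    	}
    	if !replaced {
    		lines = append(lines, "127.0.1.1 "+name)
    	}
    	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0o644)
    }

    func main() {
    	if err := ensureHostsEntry("hosts.test", "newest-cni-demo"); err != nil { // placeholder path and name
    		log.Fatal(err)
    	}
    	fmt.Println("hosts entry ensured")
    }
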
	I0817 00:45:36.576223   32084 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins\minikube-integration\.minikube CaCertPath:C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins\minikube-integration\.minikube}
	I0817 00:45:36.576223   32084 ubuntu.go:177] setting up certificates
	I0817 00:45:36.576430   32084 provision.go:83] configureAuth start
	I0817 00:45:36.588708   32084 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20210817003608-111344
	I0817 00:45:37.115437   32084 provision.go:138] copyHostCerts
	I0817 00:45:37.116339   32084 exec_runner.go:145] found C:\Users\jenkins\minikube-integration\.minikube/ca.pem, removing ...
	I0817 00:45:37.116339   32084 exec_runner.go:190] rm: C:\Users\jenkins\minikube-integration\.minikube\ca.pem
	I0817 00:45:37.116896   32084 exec_runner.go:152] cp: C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0817 00:45:37.118372   32084 exec_runner.go:145] found C:\Users\jenkins\minikube-integration\.minikube/cert.pem, removing ...
	I0817 00:45:37.118691   32084 exec_runner.go:190] rm: C:\Users\jenkins\minikube-integration\.minikube\cert.pem
	I0817 00:45:37.119048   32084 exec_runner.go:152] cp: C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0817 00:45:37.120266   32084 exec_runner.go:145] found C:\Users\jenkins\minikube-integration\.minikube/key.pem, removing ...
	I0817 00:45:37.120266   32084 exec_runner.go:190] rm: C:\Users\jenkins\minikube-integration\.minikube\key.pem
	I0817 00:45:37.120505   32084 exec_runner.go:152] cp: C:\Users\jenkins\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins\minikube-integration\.minikube/key.pem (1679 bytes)
	I0817 00:45:37.121936   32084 provision.go:112] generating server cert: C:\Users\jenkins\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.newest-cni-20210817003608-111344 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20210817003608-111344]
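
provision.go:112 mints a fresh server certificate whose SANs (the san=[...] list above) cover the container IP, loopback, and the machine's names, signed by the minikube CA. A compact crypto/x509 sketch of generating such a SAN-bearing server cert; it self-signs for brevity instead of signing with a CA key, and the organization name is a placeholder:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		log.Fatal(err)
    	}
    	tmpl := x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"demo-org"}}, // placeholder org
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs mirroring the san=[...] list in the log above.
    		DNSNames:    []string{"localhost", "minikube", "newest-cni-20210817003608-111344"},
    		IPAddresses: []net.IP{net.ParseIP("192.168.76.2"), net.ParseIP("127.0.0.1")},
    	}
    	// Self-signed for brevity; a real provisioner signs with the CA key.
    	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
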
	I0817 00:45:37.429759   32084 provision.go:172] copyRemoteCerts
	I0817 00:45:37.436769   32084 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 00:45:37.441755   32084 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210817003608-111344
	I0817 00:45:37.945352   32084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55238 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\newest-cni-20210817003608-111344\id_rsa Username:docker}
	I0817 00:45:38.160219   32084 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1269 bytes)
	I0817 00:45:38.254267   32084 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0817 00:45:38.372888   32084 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0817 00:45:35.354301   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:37.356117   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:36.986968   73600 pod_ready.go:102] pod "cilium-zt4nw" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:39.462827   73600 pod_ready.go:102] pod "cilium-zt4nw" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:37.482575   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:39.492424   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:38.477645   32084 provision.go:86] duration metric: configureAuth took 1.9011422s
	I0817 00:45:38.477645   32084 ubuntu.go:193] setting minikube options for container-runtime
	I0817 00:45:38.478106   32084 config.go:177] Loaded profile config "newest-cni-20210817003608-111344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.0-rc.0
	I0817 00:45:38.484472   32084 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210817003608-111344
	I0817 00:45:39.029909   32084 main.go:130] libmachine: Using SSH client type: native
	I0817 00:45:39.029909   32084 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x4c95a0] 0x4c9560 <nil>  [] 0s} 127.0.0.1 55238 <nil> <nil>}
	I0817 00:45:39.029909   32084 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0817 00:45:39.370625   32084 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0817 00:45:39.370824   32084 ubuntu.go:71] root file system type: overlay
	I0817 00:45:39.371094   32084 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0817 00:45:39.373185   32084 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210817003608-111344
	I0817 00:45:39.926267   32084 main.go:130] libmachine: Using SSH client type: native
	I0817 00:45:39.926874   32084 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x4c95a0] 0x4c9560 <nil>  [] 0s} 127.0.0.1 55238 <nil> <nil>}
	I0817 00:45:39.927153   32084 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0817 00:45:40.291708   32084 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0817 00:45:40.293212   32084 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210817003608-111344
	I0817 00:45:40.828063   32084 main.go:130] libmachine: Using SSH client type: native
	I0817 00:45:40.828409   32084 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x4c95a0] 0x4c9560 <nil>  [] 0s} 127.0.0.1 55238 <nil> <nil>}
	I0817 00:45:40.828532   32084 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0817 00:45:41.166103   32084 main.go:130] libmachine: SSH cmd err, output: <nil>: 
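
The one-liner above only swaps docker.service.new into place and restarts dockerd when `diff -u` reports a difference, so rerunning provisioning against an unchanged unit file costs nothing. The same compare-then-replace idea in Go (paths and service name are parameters; sudo omitted):

    package main

    import (
    	"bytes"
    	"fmt"
    	"log"
    	"os"
    	"os/exec"
    )

    // updateUnit replaces path with newPath only when the contents differ,
    // then reloads and restarts the service -- the logic of the shell
    // one-liner above.
    func updateUnit(path, newPath, service string) error {
    	oldData, _ := os.ReadFile(path) // a missing file simply forces an update
    	newData, err := os.ReadFile(newPath)
    	if err != nil {
    		return err
    	}
    	if bytes.Equal(oldData, newData) {
    		fmt.Println("unit unchanged, skipping restart")
    		return os.Remove(newPath)
    	}
    	if err := os.Rename(newPath, path); err != nil {
    		return err
    	}
    	for _, args := range [][]string{
    		{"daemon-reload"}, {"-f", "enable", service}, {"-f", "restart", service},
    	} {
    		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
    			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
    		}
    	}
    	return nil
    }

    func main() {
    	if err := updateUnit("docker.service", "docker.service.new", "docker"); err != nil {
    		log.Fatal(err)
    	}
    }
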
	I0817 00:45:41.166103   32084 machine.go:91] provisioned docker machine in 9.4378218s
	I0817 00:45:41.166103   32084 start.go:267] post-start starting for "newest-cni-20210817003608-111344" (driver="docker")
	I0817 00:45:41.166103   32084 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 00:45:41.175083   32084 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 00:45:41.180392   32084 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210817003608-111344
	I0817 00:45:41.693159   32084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55238 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\newest-cni-20210817003608-111344\id_rsa Username:docker}
	I0817 00:45:41.906370   32084 ssh_runner.go:149] Run: cat /etc/os-release
	I0817 00:45:41.931079   32084 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0817 00:45:41.931079   32084 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0817 00:45:41.931079   32084 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0817 00:45:41.931079   32084 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0817 00:45:41.931079   32084 filesync.go:126] Scanning C:\Users\jenkins\minikube-integration\.minikube\addons for local assets ...
	I0817 00:45:41.931394   32084 filesync.go:126] Scanning C:\Users\jenkins\minikube-integration\.minikube\files for local assets ...
	I0817 00:45:41.932523   32084 filesync.go:149] local asset: C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\1113442.pem -> 1113442.pem in /etc/ssl/certs
	I0817 00:45:41.940600   32084 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0817 00:45:41.978725   32084 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\1113442.pem --> /etc/ssl/certs/1113442.pem (1708 bytes)
	I0817 00:45:42.103491   32084 start.go:270] post-start completed in 937.3529ms
	I0817 00:45:42.112129   32084 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0817 00:45:42.117806   32084 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210817003608-111344
	I0817 00:45:42.593249   32084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55238 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\newest-cni-20210817003608-111344\id_rsa Username:docker}
	I0817 00:45:42.772691   32084 fix.go:57] fixHost completed within 14.9284466s
	I0817 00:45:42.773259   32084 start.go:80] releasing machines lock for "newest-cni-20210817003608-111344", held for 14.9293017s
	I0817 00:45:42.774551   32084 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20210817003608-111344
	I0817 00:45:43.258004   32084 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0817 00:45:43.265353   32084 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210817003608-111344
	I0817 00:45:43.272183   32084 ssh_runner.go:149] Run: systemctl --version
	I0817 00:45:43.277823   32084 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210817003608-111344
	I0817 00:45:39.841246   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:41.849295   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:43.862305   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:41.484380   73600 pod_ready.go:102] pod "cilium-zt4nw" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:43.494485   73600 pod_ready.go:102] pod "cilium-zt4nw" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:45.951504   73600 pod_ready.go:102] pod "cilium-zt4nw" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:41.516323   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:43.984692   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:45.993241   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:43.809205   32084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55238 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\newest-cni-20210817003608-111344\id_rsa Username:docker}
	I0817 00:45:43.824145   32084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55238 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\newest-cni-20210817003608-111344\id_rsa Username:docker}
	I0817 00:45:44.000599   32084 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0817 00:45:44.140153   32084 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0817 00:45:44.244678   32084 cruntime.go:249] skipping containerd shutdown because we are bound to it
	I0817 00:45:44.252490   32084 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0817 00:45:44.309641   32084 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 00:45:44.393457   32084 ssh_runner.go:149] Run: sudo systemctl unmask docker.service
	I0817 00:45:44.872259   32084 ssh_runner.go:149] Run: sudo systemctl enable docker.socket
	I0817 00:45:45.339132   32084 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0817 00:45:45.406016   32084 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0817 00:45:45.735092   32084 ssh_runner.go:149] Run: sudo systemctl start docker
	I0817 00:45:45.793843   32084 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0817 00:45:46.041552   32084 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0817 00:45:46.217642   32084 out.go:204] * Preparing Kubernetes v1.22.0-rc.0 on Docker 20.10.8 ...
	I0817 00:45:46.224542   32084 cli_runner.go:115] Run: docker exec -t newest-cni-20210817003608-111344 dig +short host.docker.internal
	I0817 00:45:47.050680   32084 network.go:69] got host ip for mount in container by digging dns: 192.168.65.2
	I0817 00:45:47.065057   32084 ssh_runner.go:149] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0817 00:45:47.081577   32084 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 00:45:47.143343   32084 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20210817003608-111344
	I0817 00:45:47.698141   32084 out.go:177]   - kubelet.network-plugin=cni
	I0817 00:45:47.708019   32084 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0817 00:45:47.709930   32084 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime docker
	I0817 00:45:47.717177   32084 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0817 00:45:47.921397   32084 docker.go:535] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.22.0-rc.0
	k8s.gcr.io/kube-proxy:v1.22.0-rc.0
	k8s.gcr.io/kube-scheduler:v1.22.0-rc.0
	k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0
	k8s.gcr.io/etcd:3.5.0-0
	k8s.gcr.io/coredns/coredns:v1.8.4
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.5
	k8s.gcr.io/etcd:3.4.13-3
	k8s.gcr.io/pause:3.4.1
	kubernetesui/dashboard:v2.1.0
	k8s.gcr.io/coredns/coredns:v1.8.0
	kubernetesui/metrics-scraper:v1.0.4
	
	-- /stdout --
	I0817 00:45:47.921963   32084 docker.go:466] Images already preloaded, skipping extraction
	I0817 00:45:47.934186   32084 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0817 00:45:48.140483   32084 docker.go:535] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.22.0-rc.0
	k8s.gcr.io/kube-scheduler:v1.22.0-rc.0
	k8s.gcr.io/kube-controller-manager:v1.22.0-rc.0
	k8s.gcr.io/kube-proxy:v1.22.0-rc.0
	k8s.gcr.io/etcd:3.5.0-0
	k8s.gcr.io/coredns/coredns:v1.8.4
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.5
	k8s.gcr.io/etcd:3.4.13-3
	k8s.gcr.io/pause:3.4.1
	kubernetesui/dashboard:v2.1.0
	k8s.gcr.io/coredns/coredns:v1.8.0
	kubernetesui/metrics-scraper:v1.0.4
	
	-- /stdout --
	I0817 00:45:48.140796   32084 cache_images.go:74] Images are preloaded, skipping loading
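
docker.go:535/466 above decide whether the preload tarball must be extracted by listing `docker images --format {{.Repository}}:{{.Tag}}` and checking that every expected image is present; here they all are, so extraction is skipped. A short sketch of that check, using an illustrative subset of the image list from the stdout block above:

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	have := map[string]bool{}
    	for _, img := range strings.Fields(string(out)) {
    		have[img] = true
    	}
    	// A few of the images listed in the log; illustrative subset only.
    	want := []string{
    		"k8s.gcr.io/kube-apiserver:v1.22.0-rc.0",
    		"k8s.gcr.io/etcd:3.5.0-0",
    		"gcr.io/k8s-minikube/storage-provisioner:v5",
    	}
    	for _, img := range want {
    		if !have[img] {
    			log.Fatalf("missing %s: preload extraction needed", img)
    		}
    	}
    	fmt.Println("images already preloaded, skipping extraction")
    }
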
	I0817 00:45:48.151546   32084 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
	I0817 00:45:46.341479   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:48.849648   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:47.956026   73600 pod_ready.go:102] pod "cilium-zt4nw" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:50.101426   73600 pod_ready.go:102] pod "cilium-zt4nw" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:48.020256   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:50.536856   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:48.694678   32084 cni.go:93] Creating CNI manager for ""
	I0817 00:45:48.694812   32084 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0817 00:45:48.694812   32084 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0817 00:45:48.695037   32084 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.22.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20210817003608-111344 NodeName:newest-cni-20210817003608-111344 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0817 00:45:48.696015   32084 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "newest-cni-20210817003608-111344"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.22.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0817 00:45:48.696862   32084 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.22.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20210817003608-111344 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210817003608-111344 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
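
kubeadm.go:157 renders the kubeadm config shown above from the options struct at kubeadm.go:153, and the kubelet unit at kubeadm.go:909 is produced the same way. A stripped-down sketch of rendering such a YAML fragment with text/template; the template and field set are reduced to a few of the values visible in the log and are not minikube's actual template:

    package main

    import (
    	"log"
    	"os"
    	"text/template"
    )

    // kubeadmOpts carries a small illustrative subset of the options
    // visible in the kubeadm.go:153 dump above.
    type kubeadmOpts struct {
    	Version       string
    	PodSubnet     string
    	ServiceSubnet string
    }

    const clusterConfig = `apiVersion: kubeadm.k8s.io/v1beta2
    kind: ClusterConfiguration
    controlPlaneEndpoint: control-plane.minikube.internal:8443
    kubernetesVersion: {{.Version}}
    networking:
      dnsDomain: cluster.local
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceSubnet}}
    `

    func main() {
    	opts := kubeadmOpts{
    		Version:       "v1.22.0-rc.0",
    		PodSubnet:     "192.168.111.111/16",
    		ServiceSubnet: "10.96.0.0/12",
    	}
    	t := template.Must(template.New("kubeadm").Parse(clusterConfig))
    	if err := t.Execute(os.Stdout, opts); err != nil {
    		log.Fatal(err)
    	}
    }
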
	I0817 00:45:48.713164   32084 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.22.0-rc.0
	I0817 00:45:48.751564   32084 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 00:45:48.764823   32084 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 00:45:48.792735   32084 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (421 bytes)
	I0817 00:45:48.888275   32084 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0817 00:45:48.972202   32084 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I0817 00:45:49.037888   32084 ssh_runner.go:149] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0817 00:45:49.055533   32084 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 00:45:49.106622   32084 certs.go:52] Setting up C:\Users\jenkins\minikube-integration\.minikube\profiles\newest-cni-20210817003608-111344 for IP: 192.168.76.2
	I0817 00:45:49.107934   32084 certs.go:179] skipping minikubeCA CA generation: C:\Users\jenkins\minikube-integration\.minikube\ca.key
	I0817 00:45:49.108969   32084 certs.go:179] skipping proxyClientCA CA generation: C:\Users\jenkins\minikube-integration\.minikube\proxy-client-ca.key
	I0817 00:45:49.111072   32084 certs.go:293] skipping minikube-user signed cert generation: C:\Users\jenkins\minikube-integration\.minikube\profiles\newest-cni-20210817003608-111344\client.key
	I0817 00:45:49.112162   32084 certs.go:293] skipping minikube signed cert generation: C:\Users\jenkins\minikube-integration\.minikube\profiles\newest-cni-20210817003608-111344\apiserver.key.31bdca25
	I0817 00:45:49.112777   32084 certs.go:293] skipping aggregator signed cert generation: C:\Users\jenkins\minikube-integration\.minikube\profiles\newest-cni-20210817003608-111344\proxy-client.key
	I0817 00:45:49.114288   32084 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\111344.pem (1338 bytes)
	W0817 00:45:49.115023   32084 certs.go:372] ignoring C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\111344_empty.pem, impossibly tiny 0 bytes
	I0817 00:45:49.115023   32084 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0817 00:45:49.115023   32084 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0817 00:45:49.115023   32084 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0817 00:45:49.115763   32084 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0817 00:45:49.115763   32084 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\1113442.pem (1708 bytes)
	I0817 00:45:49.118447   32084 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\newest-cni-20210817003608-111344\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 00:45:49.208160   32084 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\newest-cni-20210817003608-111344\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0817 00:45:49.289995   32084 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\newest-cni-20210817003608-111344\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 00:45:49.366389   32084 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\newest-cni-20210817003608-111344\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0817 00:45:49.504394   32084 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 00:45:49.604638   32084 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0817 00:45:49.700921   32084 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 00:45:49.869010   32084 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0817 00:45:49.975665   32084 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\1113442.pem --> /usr/share/ca-certificates/1113442.pem (1708 bytes)
	I0817 00:45:50.072617   32084 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 00:45:50.171974   32084 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\certs\111344.pem --> /usr/share/ca-certificates/111344.pem (1338 bytes)
	I0817 00:45:50.274351   32084 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0817 00:45:50.363361   32084 ssh_runner.go:149] Run: openssl version
	I0817 00:45:50.412784   32084 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1113442.pem && ln -fs /usr/share/ca-certificates/1113442.pem /etc/ssl/certs/1113442.pem"
	I0817 00:45:50.473129   32084 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1113442.pem
	I0817 00:45:50.497015   32084 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 16 23:23 /usr/share/ca-certificates/1113442.pem
	I0817 00:45:50.505673   32084 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1113442.pem
	I0817 00:45:50.536086   32084 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1113442.pem /etc/ssl/certs/3ec20f2e.0"
	I0817 00:45:50.573222   32084 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 00:45:50.635030   32084 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 00:45:50.654493   32084 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 16 23:12 /usr/share/ca-certificates/minikubeCA.pem
	I0817 00:45:50.667577   32084 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 00:45:50.707175   32084 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0817 00:45:50.754825   32084 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111344.pem && ln -fs /usr/share/ca-certificates/111344.pem /etc/ssl/certs/111344.pem"
	I0817 00:45:50.815779   32084 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/111344.pem
	I0817 00:45:50.841817   32084 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 16 23:23 /usr/share/ca-certificates/111344.pem
	I0817 00:45:50.853438   32084 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111344.pem
	I0817 00:45:50.897266   32084 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/111344.pem /etc/ssl/certs/51391683.0"
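
The link names used above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject-hash file names: `openssl x509 -hash -noout -in cert.pem` prints the hash, and the `ln -fs` recreates what c_rehash would, so OpenSSL can look a CA up by hash during verification. A small Go sketch that shells out for the hash and creates the link, with placeholder paths:

    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkByHash symlinks certPath into dir under its OpenSSL subject hash,
    // e.g. b5213941.0 -> minikubeCA.pem, as in the log lines above.
    func linkByHash(certPath, dir string) (string, error) {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return "", err
    	}
    	link := filepath.Join(dir, strings.TrimSpace(string(out))+".0")
    	os.Remove(link) // replace any stale link, like ln -fs
    	return link, os.Symlink(certPath, link)
    }

    func main() {
    	link, err := linkByHash("minikubeCA.pem", ".") // placeholder paths
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("linked as", link)
    }
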
	I0817 00:45:50.926628   32084 kubeadm.go:390] StartCluster: {Name:newest-cni-20210817003608-111344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210817003608-111344 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 00:45:50.934714   32084 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0817 00:45:51.099709   32084 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 00:45:51.129885   32084 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0817 00:45:51.130346   32084 kubeadm.go:600] restartCluster start
	I0817 00:45:51.143768   32084 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0817 00:45:51.179983   32084 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0817 00:45:51.191763   32084 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20210817003608-111344
	I0817 00:45:51.748434   32084 kubeconfig.go:117] verify returned: extract IP: "newest-cni-20210817003608-111344" does not appear in C:\Users\jenkins\minikube-integration\kubeconfig
	I0817 00:45:51.749549   32084 kubeconfig.go:128] "newest-cni-20210817003608-111344" context is missing from C:\Users\jenkins\minikube-integration\kubeconfig - will repair!
	I0817 00:45:51.751035   32084 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\kubeconfig: {Name:mk312e0248780fd448f3a83862df8ee597f47373 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 00:45:51.787206   32084 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0817 00:45:51.829105   32084 api_server.go:164] Checking apiserver status ...
	I0817 00:45:51.836963   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 00:45:51.887764   32084 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 00:45:52.089010   32084 api_server.go:164] Checking apiserver status ...
	I0817 00:45:52.099000   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 00:45:52.171337   32084 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 00:45:52.288281   32084 api_server.go:164] Checking apiserver status ...
	I0817 00:45:52.296326   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 00:45:52.349113   32084 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 00:45:52.488435   32084 api_server.go:164] Checking apiserver status ...
	I0817 00:45:52.497714   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 00:45:52.551953   32084 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 00:45:52.687965   32084 api_server.go:164] Checking apiserver status ...
	I0817 00:45:52.704517   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 00:45:52.765053   32084 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 00:45:52.888068   32084 api_server.go:164] Checking apiserver status ...
	I0817 00:45:52.895815   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 00:45:52.946201   32084 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 00:45:53.088727   32084 api_server.go:164] Checking apiserver status ...
	I0817 00:45:53.099855   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 00:45:53.152272   32084 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 00:45:53.288751   32084 api_server.go:164] Checking apiserver status ...
	I0817 00:45:53.296904   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 00:45:53.363566   32084 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 00:45:50.917071   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:53.374128   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:52.522371   73600 pod_ready.go:102] pod "cilium-zt4nw" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:54.978723   73600 pod_ready.go:102] pod "cilium-zt4nw" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:53.012282   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:55.492296   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:53.491963   32084 api_server.go:164] Checking apiserver status ...
	I0817 00:45:53.498651   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 00:45:53.563947   32084 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 00:45:53.689686   32084 api_server.go:164] Checking apiserver status ...
	I0817 00:45:53.696729   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 00:45:53.759784   32084 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 00:45:53.887952   32084 api_server.go:164] Checking apiserver status ...
	I0817 00:45:53.896348   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 00:45:53.939449   32084 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 00:45:54.088197   32084 api_server.go:164] Checking apiserver status ...
	I0817 00:45:54.104444   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 00:45:54.173323   32084 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 00:45:54.288922   32084 api_server.go:164] Checking apiserver status ...
	I0817 00:45:54.306176   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 00:45:54.381983   32084 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 00:45:54.487959   32084 api_server.go:164] Checking apiserver status ...
	I0817 00:45:54.496118   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 00:45:54.554163   32084 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 00:45:54.688716   32084 api_server.go:164] Checking apiserver status ...
	I0817 00:45:54.698795   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 00:45:54.784270   32084 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 00:45:54.888205   32084 api_server.go:164] Checking apiserver status ...
	I0817 00:45:54.896555   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 00:45:54.954971   32084 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 00:45:54.955114   32084 api_server.go:164] Checking apiserver status ...
	I0817 00:45:54.963789   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0817 00:45:55.024404   32084 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0817 00:45:55.024536   32084 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
	I0817 00:45:55.024536   32084 kubeadm.go:1032] stopping kube-system containers ...
	I0817 00:45:55.031833   32084 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0817 00:45:55.230911   32084 docker.go:367] Stopping containers: [3f898bd77d20 f2fc204cb173 20f37dcaff3d 6c2da5a2baca 1ad258249d60 dba5f9da8cf1 9a8baa9115fe aac407980692 25fbe133425d 28755f53d020 1737f02b01d3 41d3f9624cd5 11d218d1e749 00e51ba67e2f 75b1c13f00ae]
	I0817 00:45:55.235924   32084 ssh_runner.go:149] Run: docker stop 3f898bd77d20 f2fc204cb173 20f37dcaff3d 6c2da5a2baca 1ad258249d60 dba5f9da8cf1 9a8baa9115fe aac407980692 25fbe133425d 28755f53d020 1737f02b01d3 41d3f9624cd5 11d218d1e749 00e51ba67e2f 75b1c13f00ae
	I0817 00:45:55.438973   32084 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0817 00:45:55.536943   32084 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 00:45:55.566535   32084 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5639 Aug 17 00:43 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Aug 17 00:43 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2067 Aug 17 00:44 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Aug 17 00:43 /etc/kubernetes/scheduler.conf
	
	I0817 00:45:55.566535   32084 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0817 00:45:55.629779   32084 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0817 00:45:55.682971   32084 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0817 00:45:55.716675   32084 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0817 00:45:55.728977   32084 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0817 00:45:55.761792   32084 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0817 00:45:55.806664   32084 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0817 00:45:55.821245   32084 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0817 00:45:55.875139   32084 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 00:45:55.915980   32084 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0817 00:45:55.915980   32084 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 00:45:56.347060   32084 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 00:45:55.837502   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:58.342115   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:56.998560   73600 pod_ready.go:102] pod "cilium-zt4nw" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:59.072434   73600 pod_ready.go:102] pod "cilium-zt4nw" in "kube-system" namespace has status "Ready":"False"
	I0817 00:45:57.540287   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:00.038186   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:00.743298   32084 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (4.3960708s)
	I0817 00:46:00.743298   32084 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0817 00:46:01.482415   32084 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 00:46:01.934354   32084 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
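The sequence above is the recovery path: after the PID probe timed out, minikube decided the cluster "needs reconfigure", stopped the kube-system containers and kubelet, removed kubeconfig files that no longer reference https://control-plane.minikube.internal:8443, and re-ran individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than a full init, so existing cluster state is preserved. A rough Go sketch of that phase-by-phase invocation, with the config path and binaries directory taken from this run's log; this mirrors the commands shown, not minikube's internal code.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// The same phases, in the same order, as the log above.
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{"init", "phase"}, p...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("kubeadm", args...)
		// Prefer the version-pinned binaries, as the log's PATH override does.
		cmd.Env = append(os.Environ(),
			"PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:"+os.Getenv("PATH"))
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Println("phase failed:", p, err)
			return
		}
	}
}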
	I0817 00:46:02.439388   32084 api_server.go:50] waiting for apiserver process to appear ...
	I0817 00:46:02.448554   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:46:03.049143   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:46:00.835758   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:03.345507   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:01.497628   73600 pod_ready.go:102] pod "cilium-zt4nw" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:03.978476   73600 pod_ready.go:102] pod "cilium-zt4nw" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:06.008050   73600 pod_ready.go:102] pod "cilium-zt4nw" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:02.518845   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:04.519344   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:03.548680   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:46:04.055022   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:46:04.550028   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:46:05.041585   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:46:05.556288   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:46:06.050716   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:46:06.550166   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:46:07.052075   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:46:07.551559   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:46:08.052353   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:46:05.880341   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:07.904858   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:08.501085   73600 pod_ready.go:102] pod "cilium-zt4nw" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:07.040502   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:09.527921   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:08.553281   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:46:09.051310   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:46:09.553828   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:46:10.051378   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:46:10.549209   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:46:11.050319   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:46:11.549585   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:46:12.049932   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:46:12.549007   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:46:10.350111   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:12.362842   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:11.161033   73600 pod_ready.go:102] pod "cilium-zt4nw" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:12.022480   73600 pod_ready.go:92] pod "cilium-zt4nw" in "kube-system" namespace has status "Ready":"True"
	I0817 00:46:12.022641   73600 pod_ready.go:81] duration metric: took 54.157539s waiting for pod "cilium-zt4nw" in "kube-system" namespace to be "Ready" ...
	I0817 00:46:12.022641   73600 pod_ready.go:78] waiting up to 5m0s for pod "coredns-558bd4d5db-5kk5g" in "kube-system" namespace to be "Ready" ...
	I0817 00:46:14.182822   73600 pod_ready.go:102] pod "coredns-558bd4d5db-5kk5g" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:15.706393   73600 pod_ready.go:92] pod "coredns-558bd4d5db-5kk5g" in "kube-system" namespace has status "Ready":"True"
	I0817 00:46:15.706657   73600 pod_ready.go:81] duration metric: took 3.6838761s waiting for pod "coredns-558bd4d5db-5kk5g" in "kube-system" namespace to be "Ready" ...
	I0817 00:46:15.706657   73600 pod_ready.go:78] waiting up to 5m0s for pod "coredns-558bd4d5db-cvwp2" in "kube-system" namespace to be "Ready" ...
	I0817 00:46:15.725079   73600 pod_ready.go:97] error getting pod "coredns-558bd4d5db-cvwp2" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-cvwp2" not found
	I0817 00:46:15.725079   73600 pod_ready.go:81] duration metric: took 18.4211ms waiting for pod "coredns-558bd4d5db-cvwp2" in "kube-system" namespace to be "Ready" ...
	E0817 00:46:15.725079   73600 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-558bd4d5db-cvwp2" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-cvwp2" not found
	I0817 00:46:15.725079   73600 pod_ready.go:78] waiting up to 5m0s for pod "etcd-cilium-20210817002204-111344" in "kube-system" namespace to be "Ready" ...
	I0817 00:46:15.778381   73600 pod_ready.go:92] pod "etcd-cilium-20210817002204-111344" in "kube-system" namespace has status "Ready":"True"
	I0817 00:46:15.778381   73600 pod_ready.go:81] duration metric: took 53.3005ms waiting for pod "etcd-cilium-20210817002204-111344" in "kube-system" namespace to be "Ready" ...
	I0817 00:46:15.778381   73600 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-cilium-20210817002204-111344" in "kube-system" namespace to be "Ready" ...
	I0817 00:46:15.808209   73600 pod_ready.go:92] pod "kube-apiserver-cilium-20210817002204-111344" in "kube-system" namespace has status "Ready":"True"
	I0817 00:46:15.808209   73600 pod_ready.go:81] duration metric: took 29.8269ms waiting for pod "kube-apiserver-cilium-20210817002204-111344" in "kube-system" namespace to be "Ready" ...
	I0817 00:46:15.808209   73600 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-cilium-20210817002204-111344" in "kube-system" namespace to be "Ready" ...
	I0817 00:46:15.836016   73600 pod_ready.go:92] pod "kube-controller-manager-cilium-20210817002204-111344" in "kube-system" namespace has status "Ready":"True"
	I0817 00:46:15.836016   73600 pod_ready.go:81] duration metric: took 27.8061ms waiting for pod "kube-controller-manager-cilium-20210817002204-111344" in "kube-system" namespace to be "Ready" ...
	I0817 00:46:15.836016   73600 pod_ready.go:78] waiting up to 5m0s for pod "kube-proxy-mjrwl" in "kube-system" namespace to be "Ready" ...
	I0817 00:46:15.868094   73600 pod_ready.go:92] pod "kube-proxy-mjrwl" in "kube-system" namespace has status "Ready":"True"
	I0817 00:46:15.868345   73600 pod_ready.go:81] duration metric: took 32.0758ms waiting for pod "kube-proxy-mjrwl" in "kube-system" namespace to be "Ready" ...
	I0817 00:46:15.868345   73600 pod_ready.go:78] waiting up to 5m0s for pod "kube-scheduler-cilium-20210817002204-111344" in "kube-system" namespace to be "Ready" ...
	I0817 00:46:11.998831   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:14.524593   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:13.611184   32084 ssh_runner.go:189] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.0621368s)
	I0817 00:46:14.053183   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:46:14.702043   32084 api_server.go:70] duration metric: took 12.262189s to wait for apiserver process to appear ...
	I0817 00:46:14.702043   32084 api_server.go:86] waiting for apiserver healthz status ...
	I0817 00:46:14.702277   32084 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55235/healthz ...
	I0817 00:46:14.836191   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:16.850275   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:16.245972   73600 pod_ready.go:92] pod "kube-scheduler-cilium-20210817002204-111344" in "kube-system" namespace has status "Ready":"True"
	I0817 00:46:16.245972   73600 pod_ready.go:81] duration metric: took 377.6128ms waiting for pod "kube-scheduler-cilium-20210817002204-111344" in "kube-system" namespace to be "Ready" ...
	I0817 00:46:16.245972   73600 pod_ready.go:38] duration metric: took 3m43.0608939s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
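The interleaved pod_ready lines throughout this span come from the other three test processes (59296, 73600, 56104), each a separate cluster polling pod Ready conditions roughly every two seconds until the summary line above. A hypothetical client-go sketch of the same check follows; the pod name is taken from this run, while the kubeconfig path and polling cadence are assumptions, and it needs k8s.io/client-go as a module dependency.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	for {
		// Fetch the pod and inspect its Ready condition, the same signal
		// the pod_ready.go lines above report as "Ready":"True"/"False".
		pod, err := cs.CoreV1().Pods("kube-system").Get(
			context.TODO(), "cilium-zt4nw", metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					fmt.Printf("pod %q Ready=%s\n", pod.Name, c.Status)
					if c.Status == corev1.ConditionTrue {
						return
					}
				}
			}
		}
		time.Sleep(2 * time.Second) // the log polls on roughly this cadence
	}
}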
	I0817 00:46:16.245972   73600 api_server.go:50] waiting for apiserver process to appear ...
	I0817 00:46:16.254642   73600 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0817 00:46:17.179098   73600 logs.go:270] 1 containers: [3e5f0181aa79]
	I0817 00:46:17.186930   73600 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0817 00:46:17.970933   73600 logs.go:270] 1 containers: [f6eb6c2452d6]
	I0817 00:46:17.984433   73600 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0817 00:46:18.730136   73600 logs.go:270] 1 containers: [7f3c95d6335f]
	I0817 00:46:18.743186   73600 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0817 00:46:19.113184   73600 logs.go:270] 1 containers: [b87de0ae0f76]
	I0817 00:46:19.116872   73600 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0817 00:46:19.433891   73600 logs.go:270] 1 containers: [fa25c8fed512]
	I0817 00:46:19.440720   73600 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0817 00:46:19.781074   73600 logs.go:270] 0 containers: []
	W0817 00:46:19.781074   73600 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0817 00:46:19.788121   73600 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0817 00:46:20.055550   73600 logs.go:270] 2 containers: [00638b764dd3 4306a97290a5]
	I0817 00:46:20.062273   73600 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0817 00:46:20.267469   73600 logs.go:270] 1 containers: [60b439d9ae55]
	I0817 00:46:20.267469   73600 logs.go:123] Gathering logs for kube-scheduler [b87de0ae0f76] ...
	I0817 00:46:20.267469   73600 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 b87de0ae0f76"
	I0817 00:46:20.607372   73600 logs.go:123] Gathering logs for kube-proxy [fa25c8fed512] ...
	I0817 00:46:20.607372   73600 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 fa25c8fed512"
	I0817 00:46:20.814801   73600 logs.go:123] Gathering logs for Docker ...
	I0817 00:46:20.815022   73600 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0817 00:46:20.949194   73600 logs.go:123] Gathering logs for container status ...
	I0817 00:46:20.949280   73600 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0817 00:46:17.008876   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:19.479603   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:19.704506   32084 api_server.go:255] stopped: https://127.0.0.1:55235/healthz: Get "https://127.0.0.1:55235/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0817 00:46:20.205864   32084 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55235/healthz ...
	I0817 00:46:19.331549   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:21.347652   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:23.357877   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:21.347652   73600 logs.go:123] Gathering logs for kubelet ...
	I0817 00:46:21.347652   73600 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0817 00:46:21.794338   73600 logs.go:123] Gathering logs for dmesg ...
	I0817 00:46:21.794338   73600 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0817 00:46:21.879924   73600 logs.go:123] Gathering logs for describe nodes ...
	I0817 00:46:21.879924   73600 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0817 00:46:23.152324   73600 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (1.2723515s)
	I0817 00:46:23.159724   73600 logs.go:123] Gathering logs for etcd [f6eb6c2452d6] ...
	I0817 00:46:23.160039   73600 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 f6eb6c2452d6"
	I0817 00:46:23.661559   73600 logs.go:123] Gathering logs for kube-controller-manager [60b439d9ae55] ...
	I0817 00:46:23.661789   73600 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 60b439d9ae55"
	I0817 00:46:23.932868   73600 logs.go:123] Gathering logs for kube-apiserver [3e5f0181aa79] ...
	I0817 00:46:23.932868   73600 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 3e5f0181aa79"
	I0817 00:46:24.272047   73600 logs.go:123] Gathering logs for coredns [7f3c95d6335f] ...
	I0817 00:46:24.272047   73600 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 7f3c95d6335f"
	I0817 00:46:24.737143   73600 logs.go:123] Gathering logs for storage-provisioner [00638b764dd3] ...
	I0817 00:46:24.737376   73600 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 00638b764dd3"
	I0817 00:46:25.048961   73600 logs.go:123] Gathering logs for storage-provisioner [4306a97290a5] ...
	I0817 00:46:25.048961   73600 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 4306a97290a5"
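The log-gathering round above follows a fixed pattern: list container IDs per component with "docker ps -a --filter name=k8s_<component> --format {{.ID}}", then tail each container with "docker logs --tail 400". A small illustrative Go equivalent is sketched below; the containerIDs helper and the component list are made up for the example.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists IDs of containers whose names match the k8s_<component>
// prefix, mirroring the docker ps invocations in the log.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println("listing", c, "failed:", err)
			continue
		}
		for _, id := range ids {
			// Same tail depth the log uses.
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s [%s] ===\n%s\n", c, id, logs)
		}
	}
}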
	I0817 00:46:21.526574   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:24.004490   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:25.207878   32084 api_server.go:255] stopped: https://127.0.0.1:55235/healthz: Get "https://127.0.0.1:55235/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0817 00:46:25.705788   32084 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55235/healthz ...
	I0817 00:46:25.841847   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:27.862155   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:27.888584   73600 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:46:28.065581   73600 api_server.go:70] duration metric: took 3m57.8992528s to wait for apiserver process to appear ...
	I0817 00:46:28.065581   73600 api_server.go:86] waiting for apiserver healthz status ...
	I0817 00:46:28.075727   73600 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0817 00:46:28.391871   73600 logs.go:270] 1 containers: [3e5f0181aa79]
	I0817 00:46:28.398456   73600 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0817 00:46:28.613745   73600 logs.go:270] 1 containers: [f6eb6c2452d6]
	I0817 00:46:28.620248   73600 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0817 00:46:28.989655   73600 logs.go:270] 1 containers: [7f3c95d6335f]
	I0817 00:46:28.998091   73600 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0817 00:46:29.204892   73600 logs.go:270] 1 containers: [b87de0ae0f76]
	I0817 00:46:29.212715   73600 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0817 00:46:29.511393   73600 logs.go:270] 1 containers: [fa25c8fed512]
	I0817 00:46:29.521072   73600 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0817 00:46:29.984646   73600 logs.go:270] 0 containers: []
	W0817 00:46:29.984748   73600 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0817 00:46:29.989855   73600 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0817 00:46:30.477193   73600 logs.go:270] 2 containers: [00638b764dd3 4306a97290a5]
	I0817 00:46:30.485331   73600 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0817 00:46:30.722292   73600 logs.go:270] 1 containers: [60b439d9ae55]
	I0817 00:46:30.723041   73600 logs.go:123] Gathering logs for dmesg ...
	I0817 00:46:30.723041   73600 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0817 00:46:30.862539   73600 logs.go:123] Gathering logs for etcd [f6eb6c2452d6] ...
	I0817 00:46:30.862539   73600 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 f6eb6c2452d6"
	I0817 00:46:26.648679   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:29.012238   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:30.706602   32084 api_server.go:255] stopped: https://127.0.0.1:55235/healthz: Get "https://127.0.0.1:55235/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0817 00:46:31.207000   32084 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55235/healthz ...
	I0817 00:46:32.625953   32084 api_server.go:265] https://127.0.0.1:55235/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0817 00:46:32.626349   32084 api_server.go:101] status: https://127.0.0.1:55235/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0817 00:46:32.706862   32084 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55235/healthz ...
	I0817 00:46:32.775262   32084 api_server.go:265] https://127.0.0.1:55235/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W0817 00:46:32.776090   32084 api_server.go:101] status: https://127.0.0.1:55235/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I0817 00:46:33.207542   32084 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55235/healthz ...
	I0817 00:46:30.369003   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:32.387168   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:31.832925   73600 logs.go:123] Gathering logs for coredns [7f3c95d6335f] ...
	I0817 00:46:31.832925   73600 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 7f3c95d6335f"
	I0817 00:46:32.415612   73600 logs.go:123] Gathering logs for kube-scheduler [b87de0ae0f76] ...
	I0817 00:46:32.415612   73600 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 b87de0ae0f76"
	I0817 00:46:32.795088   73600 logs.go:123] Gathering logs for storage-provisioner [00638b764dd3] ...
	I0817 00:46:32.795088   73600 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 00638b764dd3"
	I0817 00:46:33.577947   73600 logs.go:123] Gathering logs for kube-controller-manager [60b439d9ae55] ...
	I0817 00:46:33.577947   73600 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 60b439d9ae55"
	I0817 00:46:34.494677   73600 logs.go:123] Gathering logs for kubelet ...
	I0817 00:46:34.494677   73600 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0817 00:46:34.895045   73600 logs.go:123] Gathering logs for describe nodes ...
	I0817 00:46:34.895045   73600 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0817 00:46:31.490316   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:33.504774   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:35.523525   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:33.458527   32084 api_server.go:265] https://127.0.0.1:55235/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0817 00:46:33.459399   32084 api_server.go:101] status: https://127.0.0.1:55235/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 00:46:33.707362   32084 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55235/healthz ...
	I0817 00:46:33.766395   32084 api_server.go:265] https://127.0.0.1:55235/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0817 00:46:33.766395   32084 api_server.go:101] status: https://127.0.0.1:55235/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 00:46:34.207244   32084 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55235/healthz ...
	I0817 00:46:34.323362   32084 api_server.go:265] https://127.0.0.1:55235/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0817 00:46:34.323362   32084 api_server.go:101] status: https://127.0.0.1:55235/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 00:46:34.706703   32084 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55235/healthz ...
	I0817 00:46:34.874066   32084 api_server.go:265] https://127.0.0.1:55235/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0817 00:46:34.874066   32084 api_server.go:101] status: https://127.0.0.1:55235/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 00:46:35.206367   32084 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55235/healthz ...
	I0817 00:46:35.598099   32084 api_server.go:265] https://127.0.0.1:55235/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0817 00:46:35.598099   32084 api_server.go:101] status: https://127.0.0.1:55235/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 00:46:35.705988   32084 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55235/healthz ...
	I0817 00:46:35.801628   32084 api_server.go:265] https://127.0.0.1:55235/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0817 00:46:35.801628   32084 api_server.go:101] status: https://127.0.0.1:55235/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 00:46:36.207589   32084 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55235/healthz ...
	I0817 00:46:36.272876   32084 api_server.go:265] https://127.0.0.1:55235/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0817 00:46:36.272876   32084 api_server.go:101] status: https://127.0.0.1:55235/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 00:46:36.706430   32084 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55235/healthz ...
	I0817 00:46:36.773297   32084 api_server.go:265] https://127.0.0.1:55235/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0817 00:46:36.773627   32084 api_server.go:101] status: https://127.0.0.1:55235/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 00:46:37.205763   32084 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55235/healthz ...
	I0817 00:46:37.426173   32084 api_server.go:265] https://127.0.0.1:55235/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0817 00:46:37.426590   32084 api_server.go:101] status: https://127.0.0.1:55235/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 00:46:37.706882   32084 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55235/healthz ...
	I0817 00:46:37.765031   32084 api_server.go:265] https://127.0.0.1:55235/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0817 00:46:37.765595   32084 api_server.go:101] status: https://127.0.0.1:55235/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 00:46:38.205941   32084 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55235/healthz ...
	I0817 00:46:34.405921   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:36.835838   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:38.884593   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:36.745082   73600 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (1.8499671s)
	I0817 00:46:36.749479   73600 logs.go:123] Gathering logs for kube-apiserver [3e5f0181aa79] ...
	I0817 00:46:36.749479   73600 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 3e5f0181aa79"
	I0817 00:46:37.499547   73600 logs.go:123] Gathering logs for kube-proxy [fa25c8fed512] ...
	I0817 00:46:37.499547   73600 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 fa25c8fed512"
	I0817 00:46:37.908879   73600 logs.go:123] Gathering logs for storage-provisioner [4306a97290a5] ...
	I0817 00:46:37.908879   73600 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 4306a97290a5"
	I0817 00:46:38.209224   73600 logs.go:123] Gathering logs for Docker ...
	I0817 00:46:38.209224   73600 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0817 00:46:38.329477   73600 logs.go:123] Gathering logs for container status ...
	I0817 00:46:38.329477   73600 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0817 00:46:37.990935   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:40.486594   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:38.389726   32084 api_server.go:265] https://127.0.0.1:55235/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0817 00:46:38.397465   32084 api_server.go:101] status: https://127.0.0.1:55235/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 00:46:38.712244   32084 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55235/healthz ...
	I0817 00:46:38.791985   32084 api_server.go:265] https://127.0.0.1:55235/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0817 00:46:38.793011   32084 api_server.go:101] status: https://127.0.0.1:55235/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 00:46:39.206377   32084 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55235/healthz ...
	I0817 00:46:39.306382   32084 api_server.go:265] https://127.0.0.1:55235/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0817 00:46:39.307513   32084 api_server.go:101] status: https://127.0.0.1:55235/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 00:46:39.706101   32084 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55235/healthz ...
	I0817 00:46:39.765359   32084 api_server.go:265] https://127.0.0.1:55235/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0817 00:46:39.765674   32084 api_server.go:101] status: https://127.0.0.1:55235/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0817 00:46:40.212070   32084 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55235/healthz ...
	I0817 00:46:40.248228   32084 api_server.go:265] https://127.0.0.1:55235/healthz returned 200:
	ok
	I0817 00:46:40.308529   32084 api_server.go:139] control plane version: v1.22.0-rc.0
	I0817 00:46:40.308666   32084 api_server.go:129] duration metric: took 25.605513s to wait for apiserver health ...
	I0817 00:46:40.308666   32084 cni.go:93] Creating CNI manager for ""
	I0817 00:46:40.308666   32084 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0817 00:46:40.308864   32084 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 00:46:40.375820   32084 system_pods.go:59] 8 kube-system pods found
	I0817 00:46:40.375820   32084 system_pods.go:61] "coredns-78fcd69978-4rqlg" [e31d4e8c-dd23-45cf-9a37-aba902e87d97] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0817 00:46:40.375820   32084 system_pods.go:61] "etcd-newest-cni-20210817003608-111344" [43d91330-4f5d-46ac-aef5-352c59424787] Running
	I0817 00:46:40.375820   32084 system_pods.go:61] "kube-apiserver-newest-cni-20210817003608-111344" [b6d309fd-9aa2-45b7-aab0-caa42b6e983c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0817 00:46:40.375820   32084 system_pods.go:61] "kube-controller-manager-newest-cni-20210817003608-111344" [8d7f557b-d69d-4017-a638-ec780cd4ccf3] Running
	I0817 00:46:40.375820   32084 system_pods.go:61] "kube-proxy-9nj8l" [de7a7f83-5225-4d60-9fba-e7b0c120247f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0817 00:46:40.375820   32084 system_pods.go:61] "kube-scheduler-newest-cni-20210817003608-111344" [199c0871-b83b-4083-8f3a-05523bb205dd] Running
	I0817 00:46:40.375820   32084 system_pods.go:61] "metrics-server-7c784ccb57-vkvfp" [9ec6eb01-a852-4f2e-a8bb-0d9888bcf668] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 00:46:40.375820   32084 system_pods.go:61] "storage-provisioner" [af23beac-6b23-4a97-9b39-7db56aa9f154] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0817 00:46:40.375820   32084 system_pods.go:74] duration metric: took 66.954ms to wait for pod list to return data ...
	I0817 00:46:40.376494   32084 node_conditions.go:102] verifying NodePressure condition ...
	I0817 00:46:40.410975   32084 node_conditions.go:122] node storage ephemeral capacity is 65792556Ki
	I0817 00:46:40.411230   32084 node_conditions.go:123] node cpu capacity is 4
	I0817 00:46:40.411230   32084 node_conditions.go:105] duration metric: took 34.7342ms to run NodePressure ...
	I0817 00:46:40.411230   32084 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0817 00:46:41.388882   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:43.509723   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:45.559702   32084 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (5.1481321s)
	I0817 00:46:45.560825   32084 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0817 00:46:45.686116   32084 ops.go:34] apiserver oom_adj: -16
	I0817 00:46:45.686210   32084 kubeadm.go:604] restartCluster took 54.5535543s
	I0817 00:46:45.686210   32084 kubeadm.go:392] StartCluster complete in 54.7575014s
	I0817 00:46:45.686378   32084 settings.go:142] acquiring lock: {Name:mk81656fcf8bcddd49caaa1adb1c177165a02100 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 00:46:45.686701   32084 settings.go:150] Updating kubeconfig:  C:\Users\jenkins\minikube-integration\kubeconfig
	I0817 00:46:45.699752   32084 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\kubeconfig: {Name:mk312e0248780fd448f3a83862df8ee597f47373 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 00:46:45.803723   32084 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20210817003608-111344" rescaled to 1
	I0817 00:46:45.804161   32084 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0817 00:46:45.804324   32084 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0817 00:46:45.804161   32084 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}
	I0817 00:46:45.804580   32084 addons.go:59] Setting storage-provisioner=true in profile "newest-cni-20210817003608-111344"
	I0817 00:46:45.804580   32084 addons.go:59] Setting default-storageclass=true in profile "newest-cni-20210817003608-111344"
	I0817 00:46:45.804580   32084 addons.go:59] Setting dashboard=true in profile "newest-cni-20210817003608-111344"
	I0817 00:46:45.804681   32084 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20210817003608-111344"
	I0817 00:46:45.804864   32084 addons.go:135] Setting addon storage-provisioner=true in "newest-cni-20210817003608-111344"
	W0817 00:46:45.804864   32084 addons.go:147] addon storage-provisioner should already be in state true
	I0817 00:46:45.804864   32084 addons.go:135] Setting addon dashboard=true in "newest-cni-20210817003608-111344"
	I0817 00:46:41.291462   73600 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55215/healthz ...
	I0817 00:46:41.363328   73600 api_server.go:265] https://127.0.0.1:55215/healthz returned 200:
	ok
	I0817 00:46:41.372925   73600 api_server.go:139] control plane version: v1.21.3
	I0817 00:46:41.373110   73600 api_server.go:129] duration metric: took 13.3070227s to wait for apiserver health ...
	I0817 00:46:41.373110   73600 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 00:46:41.373333   73600 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0817 00:46:42.070778   73600 logs.go:270] 1 containers: [3e5f0181aa79]
	I0817 00:46:42.079041   73600 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0817 00:46:42.548958   73600 logs.go:270] 1 containers: [f6eb6c2452d6]
	I0817 00:46:42.556190   73600 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0817 00:46:42.958008   73600 logs.go:270] 1 containers: [7f3c95d6335f]
	I0817 00:46:42.965538   73600 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0817 00:46:43.436064   73600 logs.go:270] 1 containers: [b87de0ae0f76]
	I0817 00:46:43.442581   73600 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0817 00:46:43.720813   73600 logs.go:270] 1 containers: [fa25c8fed512]
	I0817 00:46:43.727210   73600 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0817 00:46:44.346327   73600 logs.go:270] 0 containers: []
	W0817 00:46:44.346515   73600 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0817 00:46:44.353865   73600 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0817 00:46:44.778502   73600 logs.go:270] 2 containers: [00638b764dd3 4306a97290a5]
	I0817 00:46:44.784792   73600 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0817 00:46:45.235819   73600 logs.go:270] 1 containers: [60b439d9ae55]
	I0817 00:46:45.235819   73600 logs.go:123] Gathering logs for kube-scheduler [b87de0ae0f76] ...
	I0817 00:46:45.235819   73600 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 b87de0ae0f76"
	I0817 00:46:45.750111   73600 logs.go:123] Gathering logs for kube-proxy [fa25c8fed512] ...
	I0817 00:46:45.750276   73600 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 fa25c8fed512"
	I0817 00:46:42.996270   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:45.010647   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:45.806645   32084 out.go:177] * Verifying Kubernetes components...
	I0817 00:46:45.804681   32084 addons.go:59] Setting metrics-server=true in profile "newest-cni-20210817003608-111344"
	W0817 00:46:45.804864   32084 addons.go:147] addon dashboard should already be in state true
	I0817 00:46:45.805166   32084 host.go:66] Checking if "newest-cni-20210817003608-111344" exists ...
	I0817 00:46:45.805665   32084 config.go:177] Loaded profile config "newest-cni-20210817003608-111344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.0-rc.0
	I0817 00:46:45.806645   32084 addons.go:135] Setting addon metrics-server=true in "newest-cni-20210817003608-111344"
	W0817 00:46:45.806645   32084 addons.go:147] addon metrics-server should already be in state true
	I0817 00:46:45.807318   32084 host.go:66] Checking if "newest-cni-20210817003608-111344" exists ...
	I0817 00:46:45.807318   32084 host.go:66] Checking if "newest-cni-20210817003608-111344" exists ...
	I0817 00:46:45.819240   32084 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 00:46:45.828044   32084 cli_runner.go:115] Run: docker container inspect newest-cni-20210817003608-111344 --format={{.State.Status}}
	I0817 00:46:45.829061   32084 cli_runner.go:115] Run: docker container inspect newest-cni-20210817003608-111344 --format={{.State.Status}}
	I0817 00:46:45.831847   32084 cli_runner.go:115] Run: docker container inspect newest-cni-20210817003608-111344 --format={{.State.Status}}
	I0817 00:46:45.834894   32084 cli_runner.go:115] Run: docker container inspect newest-cni-20210817003608-111344 --format={{.State.Status}}
	I0817 00:46:46.537061   32084 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0817 00:46:46.537645   32084 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0817 00:46:46.537645   32084 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0817 00:46:46.543937   32084 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210817003608-111344
	I0817 00:46:46.550789   32084 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 00:46:46.550789   32084 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 00:46:46.550789   32084 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0817 00:46:46.558996   32084 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0817 00:46:46.557994   32084 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210817003608-111344
	I0817 00:46:46.561044   32084 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0817 00:46:46.561044   32084 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0817 00:46:46.561044   32084 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0817 00:46:46.567008   32084 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210817003608-111344
	I0817 00:46:46.719798   32084 addons.go:135] Setting addon default-storageclass=true in "newest-cni-20210817003608-111344"
	W0817 00:46:46.719934   32084 addons.go:147] addon default-storageclass should already be in state true
	I0817 00:46:46.726163   32084 host.go:66] Checking if "newest-cni-20210817003608-111344" exists ...
	I0817 00:46:46.737914   32084 cli_runner.go:115] Run: docker container inspect newest-cni-20210817003608-111344 --format={{.State.Status}}
	I0817 00:46:47.127096   32084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55238 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\newest-cni-20210817003608-111344\id_rsa Username:docker}
	I0817 00:46:47.138097   32084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55238 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\newest-cni-20210817003608-111344\id_rsa Username:docker}
	I0817 00:46:47.148226   32084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55238 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\newest-cni-20210817003608-111344\id_rsa Username:docker}
	I0817 00:46:47.314097   32084 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0817 00:46:47.314228   32084 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0817 00:46:47.325366   32084 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210817003608-111344
	I0817 00:46:47.827549   32084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55238 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\newest-cni-20210817003608-111344\id_rsa Username:docker}
	I0817 00:46:45.853324   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:47.854527   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:46.485646   73600 logs.go:123] Gathering logs for storage-provisioner [00638b764dd3] ...
	I0817 00:46:46.485646   73600 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 00638b764dd3"
	I0817 00:46:47.452760   73600 logs.go:123] Gathering logs for storage-provisioner [4306a97290a5] ...
	I0817 00:46:47.452878   73600 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 4306a97290a5"
	I0817 00:46:47.807624   73600 logs.go:123] Gathering logs for container status ...
	I0817 00:46:47.807624   73600 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0817 00:46:48.155130   73600 logs.go:123] Gathering logs for kubelet ...
	I0817 00:46:48.155334   73600 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0817 00:46:48.529780   73600 logs.go:123] Gathering logs for describe nodes ...
	I0817 00:46:48.530783   73600 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0817 00:46:49.969401   73600 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (1.4385634s)
	I0817 00:46:49.974005   73600 logs.go:123] Gathering logs for coredns [7f3c95d6335f] ...
	I0817 00:46:49.974118   73600 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 7f3c95d6335f"
	I0817 00:46:50.253902   73600 logs.go:123] Gathering logs for kube-controller-manager [60b439d9ae55] ...
	I0817 00:46:50.253902   73600 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 60b439d9ae55"
	I0817 00:46:50.703420   73600 logs.go:123] Gathering logs for Docker ...
	I0817 00:46:50.703420   73600 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0817 00:46:50.884512   73600 logs.go:123] Gathering logs for dmesg ...
	I0817 00:46:50.884512   73600 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0817 00:46:50.983627   73600 logs.go:123] Gathering logs for kube-apiserver [3e5f0181aa79] ...
	I0817 00:46:50.983838   73600 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 3e5f0181aa79"
	I0817 00:46:47.286051   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:49.518431   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:48.853882   32084 ssh_runner.go:189] Completed: sudo systemctl is-active --quiet service kubelet: (3.0345266s)
	I0817 00:46:48.854093   32084 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (3.0496047s)
	I0817 00:46:48.854974   32084 start.go:708] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0817 00:46:48.860925   32084 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20210817003608-111344
	I0817 00:46:49.072190   32084 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0817 00:46:49.072190   32084 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0817 00:46:49.356794   32084 api_server.go:50] waiting for apiserver process to appear ...
	I0817 00:46:49.365400   32084 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 00:46:49.435495   32084 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0817 00:46:49.436485   32084 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0817 00:46:49.499337   32084 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 00:46:49.517915   32084 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0817 00:46:49.518128   32084 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0817 00:46:49.654719   32084 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0817 00:46:49.654719   32084 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0817 00:46:49.678455   32084 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0817 00:46:49.782692   32084 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 00:46:49.782692   32084 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0817 00:46:50.379506   32084 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 00:46:50.578199   32084 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0817 00:46:50.578199   32084 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0817 00:46:50.794101   32084 ssh_runner.go:189] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.4286462s)
	I0817 00:46:50.794202   32084 api_server.go:70] duration metric: took 4.9895225s to wait for apiserver process to appear ...
	I0817 00:46:50.794202   32084 api_server.go:86] waiting for apiserver healthz status ...
	I0817 00:46:50.794202   32084 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55235/healthz ...
	I0817 00:46:50.832268   32084 api_server.go:265] https://127.0.0.1:55235/healthz returned 200:
	ok
	I0817 00:46:50.839314   32084 api_server.go:139] control plane version: v1.22.0-rc.0
	I0817 00:46:50.839314   32084 api_server.go:129] duration metric: took 45.1101ms to wait for apiserver health ...
	I0817 00:46:50.839314   32084 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 00:46:50.912069   32084 system_pods.go:59] 8 kube-system pods found
	I0817 00:46:50.912199   32084 system_pods.go:61] "coredns-78fcd69978-4rqlg" [e31d4e8c-dd23-45cf-9a37-aba902e87d97] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0817 00:46:50.912199   32084 system_pods.go:61] "etcd-newest-cni-20210817003608-111344" [43d91330-4f5d-46ac-aef5-352c59424787] Running
	I0817 00:46:50.912199   32084 system_pods.go:61] "kube-apiserver-newest-cni-20210817003608-111344" [b6d309fd-9aa2-45b7-aab0-caa42b6e983c] Running
	I0817 00:46:50.912199   32084 system_pods.go:61] "kube-controller-manager-newest-cni-20210817003608-111344" [8d7f557b-d69d-4017-a638-ec780cd4ccf3] Running
	I0817 00:46:50.912199   32084 system_pods.go:61] "kube-proxy-9nj8l" [de7a7f83-5225-4d60-9fba-e7b0c120247f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0817 00:46:50.912199   32084 system_pods.go:61] "kube-scheduler-newest-cni-20210817003608-111344" [199c0871-b83b-4083-8f3a-05523bb205dd] Running
	I0817 00:46:50.912199   32084 system_pods.go:61] "metrics-server-7c784ccb57-vkvfp" [9ec6eb01-a852-4f2e-a8bb-0d9888bcf668] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0817 00:46:50.912199   32084 system_pods.go:61] "storage-provisioner" [af23beac-6b23-4a97-9b39-7db56aa9f154] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0817 00:46:50.912199   32084 system_pods.go:74] duration metric: took 72.8823ms to wait for pod list to return data ...
	I0817 00:46:50.912199   32084 default_sa.go:34] waiting for default service account to be created ...
	I0817 00:46:50.939211   32084 default_sa.go:45] found service account: "default"
	I0817 00:46:50.939211   32084 default_sa.go:55] duration metric: took 27.0106ms for default service account to be created ...
	I0817 00:46:50.939211   32084 kubeadm.go:547] duration metric: took 5.1345255s to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0817 00:46:50.939456   32084 node_conditions.go:102] verifying NodePressure condition ...
	I0817 00:46:50.965726   32084 node_conditions.go:122] node storage ephemeral capacity is 65792556Ki
	I0817 00:46:50.965726   32084 node_conditions.go:123] node cpu capacity is 4
	I0817 00:46:50.965726   32084 node_conditions.go:105] duration metric: took 26.2689ms to run NodePressure ...
	I0817 00:46:50.965726   32084 start.go:231] waiting for startup goroutines ...
	I0817 00:46:51.391185   32084 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0817 00:46:51.391185   32084 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0817 00:46:52.136529   32084 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0817 00:46:52.136529   32084 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0817 00:46:52.879752   32084 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0817 00:46:52.879752   32084 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0817 00:46:49.873540   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:52.420534   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:51.245221   73600 logs.go:123] Gathering logs for etcd [f6eb6c2452d6] ...
	I0817 00:46:51.245221   73600 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 f6eb6c2452d6"
	I0817 00:46:54.268623   73600 system_pods.go:59] 9 kube-system pods found
	I0817 00:46:54.268764   73600 system_pods.go:61] "cilium-operator-99d899fb5-47tqd" [282c6ed0-512e-4527-8abd-c20b109a3ab5] Running
	I0817 00:46:54.268764   73600 system_pods.go:61] "cilium-zt4nw" [e6d28534-126f-46ed-a6f4-4f547e173b18] Running
	I0817 00:46:54.268764   73600 system_pods.go:61] "coredns-558bd4d5db-5kk5g" [b9fac283-fb2e-4da6-882b-f1e25b1a063f] Running
	I0817 00:46:54.268860   73600 system_pods.go:61] "etcd-cilium-20210817002204-111344" [0c40eb71-82aa-45bd-80d2-de25bb50aa30] Running
	I0817 00:46:54.268860   73600 system_pods.go:61] "kube-apiserver-cilium-20210817002204-111344" [c8d0631b-2b14-4310-81a8-ea94e8ef2a3f] Running
	I0817 00:46:54.268860   73600 system_pods.go:61] "kube-controller-manager-cilium-20210817002204-111344" [76bc8068-f9f7-44dc-b298-93ac3f8cce97] Running
	I0817 00:46:54.268860   73600 system_pods.go:61] "kube-proxy-mjrwl" [2c253bdb-59d9-4892-bbc7-900370c9783d] Running
	I0817 00:46:54.268860   73600 system_pods.go:61] "kube-scheduler-cilium-20210817002204-111344" [849b2404-ace6-4909-9c24-4842549362b8] Running
	I0817 00:46:54.268860   73600 system_pods.go:61] "storage-provisioner" [8cfa9260-52d1-4533-aa01-7d71b7565697] Running
	I0817 00:46:54.268860   73600 system_pods.go:74] duration metric: took 12.8952597s to wait for pod list to return data ...
	I0817 00:46:54.268860   73600 default_sa.go:34] waiting for default service account to be created ...
	I0817 00:46:54.272898   73600 default_sa.go:45] found service account: "default"
	I0817 00:46:54.272898   73600 default_sa.go:55] duration metric: took 4.0387ms for default service account to be created ...
	I0817 00:46:54.272898   73600 system_pods.go:116] waiting for k8s-apps to be running ...
	I0817 00:46:54.308094   73600 system_pods.go:86] 9 kube-system pods found
	I0817 00:46:54.308201   73600 system_pods.go:89] "cilium-operator-99d899fb5-47tqd" [282c6ed0-512e-4527-8abd-c20b109a3ab5] Running
	I0817 00:46:54.308201   73600 system_pods.go:89] "cilium-zt4nw" [e6d28534-126f-46ed-a6f4-4f547e173b18] Running
	I0817 00:46:54.308201   73600 system_pods.go:89] "coredns-558bd4d5db-5kk5g" [b9fac283-fb2e-4da6-882b-f1e25b1a063f] Running
	I0817 00:46:54.308201   73600 system_pods.go:89] "etcd-cilium-20210817002204-111344" [0c40eb71-82aa-45bd-80d2-de25bb50aa30] Running
	I0817 00:46:54.308201   73600 system_pods.go:89] "kube-apiserver-cilium-20210817002204-111344" [c8d0631b-2b14-4310-81a8-ea94e8ef2a3f] Running
	I0817 00:46:54.308201   73600 system_pods.go:89] "kube-controller-manager-cilium-20210817002204-111344" [76bc8068-f9f7-44dc-b298-93ac3f8cce97] Running
	I0817 00:46:54.308201   73600 system_pods.go:89] "kube-proxy-mjrwl" [2c253bdb-59d9-4892-bbc7-900370c9783d] Running
	I0817 00:46:54.308201   73600 system_pods.go:89] "kube-scheduler-cilium-20210817002204-111344" [849b2404-ace6-4909-9c24-4842549362b8] Running
	I0817 00:46:54.308303   73600 system_pods.go:89] "storage-provisioner" [8cfa9260-52d1-4533-aa01-7d71b7565697] Running
	I0817 00:46:54.308303   73600 system_pods.go:126] duration metric: took 35.4035ms to wait for k8s-apps to be running ...
	I0817 00:46:54.308395   73600 system_svc.go:44] waiting for kubelet service to be running ....
	I0817 00:46:54.316134   73600 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 00:46:54.411712   73600 system_svc.go:56] duration metric: took 103.3128ms WaitForService to wait for kubelet.
	I0817 00:46:54.411864   73600 kubeadm.go:547] duration metric: took 4m24.2443824s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0817 00:46:54.411864   73600 node_conditions.go:102] verifying NodePressure condition ...
	I0817 00:46:54.420626   73600 node_conditions.go:122] node storage ephemeral capacity is 65792556Ki
	I0817 00:46:54.420731   73600 node_conditions.go:123] node cpu capacity is 4
	I0817 00:46:54.420731   73600 node_conditions.go:105] duration metric: took 8.8669ms to run NodePressure ...
	I0817 00:46:54.420826   73600 start.go:231] waiting for startup goroutines ...
	I0817 00:46:54.607082   73600 start.go:462] kubectl: 1.20.0, cluster: 1.21.3 (minor skew: 1)
	I0817 00:46:54.609311   73600 out.go:177] * Done! kubectl is now configured to use "cilium-20210817002204-111344" cluster and "default" namespace by default
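
The cilium stream above finishes by confirming that all nine kube-system pods report Running (system_pods.go:116, "waiting for k8s-apps to be running") before declaring the cluster ready. A minimal sketch of that kind of check, assuming client-go; this is an illustration, not minikube's actual system_pods implementation:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// allSystemPodsRunning lists kube-system pods and reports whether every
// one of them is in the Running phase (and at least one pod exists).
func allSystemPodsRunning(ctx context.Context, cs *kubernetes.Clientset) (bool, error) {
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		return false, err
	}
	for _, p := range pods.Items {
		if p.Status.Phase != corev1.PodRunning {
			return false, nil
		}
	}
	return len(pods.Items) > 0, nil
}

func main() {
	// Assumes a kubeconfig at the default location (~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ok, err := allSystemPodsRunning(context.Background(), cs)
	fmt.Println(ok, err)
}

Note that Running is a weaker condition than Ready: the per-pod polls elsewhere in this log check the Ready condition instead, as sketched further below.
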
	I0817 00:46:51.527655   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:53.532789   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:55.996605   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:53.568801   32084 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0817 00:46:53.568801   32084 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0817 00:46:53.672945   32084 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0817 00:46:53.672945   32084 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0817 00:46:53.953306   32084 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0817 00:46:53.953443   32084 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0817 00:46:54.511731   32084 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0817 00:46:55.182612   32084 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.6830586s)
	I0817 00:46:55.188979   32084 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.5103147s)
	I0817 00:46:55.829821   32084 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.4501077s)
	I0817 00:46:55.829821   32084 addons.go:313] Verifying addon metrics-server=true in "newest-cni-20210817003608-111344"
	I0817 00:46:58.767442   32084 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.2555493s)
	I0817 00:46:58.769937   32084 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0817 00:46:58.770219   32084 addons.go:344] enableAddons completed in 12.9655654s
	I0817 00:46:58.919436   32084 start.go:462] kubectl: 1.20.0, cluster: 1.22.0-rc.0 (minor skew: 2)
	I0817 00:46:58.927300   32084 out.go:177] 
	W0817 00:46:58.927563   32084 out.go:242] ! C:\Program Files\Docker\Docker\resources\bin\kubectl.exe is version 1.20.0, which may have incompatibilities with Kubernetes 1.22.0-rc.0.
	I0817 00:46:58.929374   32084 out.go:177]   - Want kubectl v1.22.0-rc.0? Try 'minikube kubectl -- get pods -A'
	I0817 00:46:58.931225   32084 out.go:177] * Done! kubectl is now configured to use "newest-cni-20210817003608-111344" cluster and "default" namespace by default
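
The warning just above flags a client/server minor-version skew of 2 (kubectl 1.20.0 against Kubernetes 1.22.0-rc.0), versus the supported skew of 1 seen in the cilium stream. A small self-contained sketch of the arithmetic behind the "(minor skew: N)" figure; this is an assumed illustration, not minikube's version-checking code:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference between the minor components of
// two Kubernetes-style version strings such as "1.20.0" or "v1.22.0-rc.0".
func minorSkew(client, server string) (int, error) {
	minor := func(v string) (int, error) {
		v = strings.TrimPrefix(v, "v")
		parts := strings.Split(v, ".")
		if len(parts) < 2 {
			return 0, fmt.Errorf("unparseable version %q", v)
		}
		return strconv.Atoi(parts[1])
	}
	c, err := minor(client)
	if err != nil {
		return 0, err
	}
	s, err := minor(server)
	if err != nil {
		return 0, err
	}
	if c > s {
		return c - s, nil
	}
	return s - c, nil
}

func main() {
	skew, _ := minorSkew("1.20.0", "1.22.0-rc.0")
	fmt.Println(skew) // prints 2, matching the warning above
}
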
	I0817 00:46:54.877671   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:57.352071   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:58.020700   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:00.026526   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:46:59.367541   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:01.852774   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:03.864430   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:02.517865   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:04.986466   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:06.340126   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:08.371110   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:07.059721   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:09.533454   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:10.853228   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:13.438052   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:12.007117   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:14.014736   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:15.863766   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:18.371799   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:16.513999   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:18.573976   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:21.013036   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:20.454429   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:22.839333   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:23.036479   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:25.507501   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:24.854225   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:27.335737   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:28.005629   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:30.495498   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:29.349653   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:31.363950   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:33.853927   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:33.013319   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:35.526865   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:36.362096   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:38.860834   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:38.141181   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:41.663433   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:41.700456   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:44.023765   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:46.026459   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:44.157742   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:46.356831   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:48.835606   59296 pod_ready.go:102] pod "coredns-558bd4d5db-xnqd6" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:48.722168   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
	I0817 00:47:51.048717   56104 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-gtb6k" in "kube-system" namespace has status "Ready":"False"
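
The interleaved pod_ready.go:102 lines above come from per-pod readiness polls: each iteration fetches the pod (coredns-558bd4d5db-xnqd6 and calico-kube-controllers-58497c65d5-gtb6k here) and inspects its Ready condition, logging "Ready":"False" until it flips or the poll times out. A minimal sketch of such a poll, assuming client-go and apimachinery's wait helpers; illustrative only, not the test harness's actual code:

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls every 2s until the named pod's Ready condition is
// True, or until timeout elapses.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // keep polling on transient errors
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil // no Ready condition recorded yet
	})
}
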
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2021-08-17 00:45:32 UTC, end at Tue 2021-08-17 00:47:59 UTC. --
	Aug 17 00:45:35 newest-cni-20210817003608-111344 systemd[1]: Started Docker Application Container Engine.
	Aug 17 00:45:35 newest-cni-20210817003608-111344 dockerd[214]: time="2021-08-17T00:45:35.256494900Z" level=info msg="API listen on [::]:2376"
	Aug 17 00:45:35 newest-cni-20210817003608-111344 dockerd[214]: time="2021-08-17T00:45:35.282827000Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 17 00:46:51 newest-cni-20210817003608-111344 dockerd[214]: time="2021-08-17T00:46:51.588664900Z" level=info msg="ignoring event" container=3bfef131d1b1f12f04d417d281804d41ec5102023e2def746de638cf3b080afb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 17 00:46:52 newest-cni-20210817003608-111344 dockerd[214]: time="2021-08-17T00:46:52.018776900Z" level=info msg="ignoring event" container=07dad6a063d0f793a01c019f42cb72df251e320b5837881ebc2f18d6e5d4e202 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 17 00:46:59 newest-cni-20210817003608-111344 dockerd[214]: time="2021-08-17T00:46:59.301831600Z" level=info msg="ignoring event" container=f9dcbc7254771bbc72ff6f191cf9f076caf32f5d0808b9fbbc6db3438eb4799c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 17 00:47:02 newest-cni-20210817003608-111344 dockerd[214]: time="2021-08-17T00:47:02.187379300Z" level=info msg="ignoring event" container=8af96648d391d00e142cd622831c0404356e6a3e02d40e41d1e668eefa23e4aa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 17 00:47:06 newest-cni-20210817003608-111344 dockerd[214]: time="2021-08-17T00:47:06.738851300Z" level=info msg="ignoring event" container=f4aacd3d844a6f76a85bffd1e77f606e1b4b56ac6fbb4bf89e4e0f524eb3352d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 17 00:47:08 newest-cni-20210817003608-111344 dockerd[214]: time="2021-08-17T00:47:08.621499400Z" level=info msg="ignoring event" container=580b18e7b3a2e91b3e510c7485e65413ec7a8e8047c86bd5dacc44ea99ac8d82 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 17 00:47:09 newest-cni-20210817003608-111344 dockerd[214]: time="2021-08-17T00:47:09.396246400Z" level=info msg="ignoring event" container=dff9c779de811385e6299833728dc4cefa70859d314ba7e65509a181c578393b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 17 00:47:15 newest-cni-20210817003608-111344 dockerd[214]: time="2021-08-17T00:47:15.012088400Z" level=info msg="ignoring event" container=3aeab7968c57c3e917a7e763720ad196010cd81506d99bd82e8f4ebd535baed0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 17 00:47:20 newest-cni-20210817003608-111344 dockerd[214]: time="2021-08-17T00:47:20.420655700Z" level=info msg="ignoring event" container=c073b9be55aed2ad08d5398d7570bc0434e302669f76f96e3dd696d10d6b6e25 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 17 00:47:22 newest-cni-20210817003608-111344 dockerd[214]: time="2021-08-17T00:47:22.006288200Z" level=info msg="ignoring event" container=e611ec2d84688b3925132b85a4323c3bba873c1fabb203e4990e8b23e29cc5d0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 17 00:47:25 newest-cni-20210817003608-111344 dockerd[214]: time="2021-08-17T00:47:25.876752500Z" level=info msg="ignoring event" container=4f9627e4c643ff84f76dc44391c24481fed8843365dbe644c8217b1591dfb81e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 17 00:47:30 newest-cni-20210817003608-111344 dockerd[214]: time="2021-08-17T00:47:30.168873700Z" level=info msg="ignoring event" container=fca59cca90cee28b1cf71a788d1beb60404eacc25a612f388bc66fba99b7768b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 17 00:47:31 newest-cni-20210817003608-111344 dockerd[214]: time="2021-08-17T00:47:31.034287700Z" level=info msg="ignoring event" container=269468db6d2462fac00115800e0e7796638393130943d376041ad1cbb4921ba5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 17 00:47:36 newest-cni-20210817003608-111344 dockerd[214]: time="2021-08-17T00:47:36.494053500Z" level=info msg="ignoring event" container=b8ee2ec885de5552dbbaf92bce05904d3ec0d22b2aa07fd13ce0083bb5cc7699 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 17 00:47:37 newest-cni-20210817003608-111344 dockerd[214]: time="2021-08-17T00:47:37.097348200Z" level=info msg="ignoring event" container=2333add2d120ccd04c2806f66429cf15a9fa8110a46b8d74c082271aca877f4f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 17 00:47:37 newest-cni-20210817003608-111344 dockerd[214]: time="2021-08-17T00:47:37.425756000Z" level=info msg="ignoring event" container=6b91176bd5311f7fa5d393c0aec0c63a5550425f9daeca25cb1f5d0ca47c7518 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 17 00:47:39 newest-cni-20210817003608-111344 dockerd[214]: time="2021-08-17T00:47:39.261611700Z" level=info msg="ignoring event" container=8c425c9ba31850bbc590a3dec0534beab35c5de33efbf1badb9ce954e3278178 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 17 00:47:43 newest-cni-20210817003608-111344 dockerd[214]: time="2021-08-17T00:47:43.404053800Z" level=info msg="ignoring event" container=c00792ea23880cdbd477ae8191c7d8b3318ce68501c40a4726899cdee611f5fa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 17 00:47:51 newest-cni-20210817003608-111344 dockerd[214]: time="2021-08-17T00:47:50.997552000Z" level=info msg="ignoring event" container=14cb6664c505a74b9640ba1088eb06670a4fbaea1bbaa88f90ced00dbe46b366 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 17 00:47:52 newest-cni-20210817003608-111344 dockerd[214]: time="2021-08-17T00:47:52.061235000Z" level=info msg="ignoring event" container=cfbf246eb9a0c6c0f5289a680ffc15becf1678214d33434545bbc152899cd871 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 17 00:47:52 newest-cni-20210817003608-111344 dockerd[214]: time="2021-08-17T00:47:52.410007300Z" level=info msg="ignoring event" container=7a48b35a5e740068841485bb478c0b3d1ada8a68efe181642f2c5c9f26398a63 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 17 00:47:56 newest-cni-20210817003608-111344 dockerd[214]: time="2021-08-17T00:47:56.886591600Z" level=info msg="ignoring event" container=ed48c01f6f64d942f7b0bb99581e9490a588c1ff99cb8b27680e5230ff3d2420 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	03ba21d79411d       ea6b13ed84e03       51 seconds ago       Running             kube-proxy                1                   0c0bd04ab0c04
	2333add2d120c       6e38f40d628db       56 seconds ago       Exited              storage-provisioner       1                   9182ee79e2b41
	f5faeb9a923fd       b2462aa94d403       About a minute ago   Running             kube-apiserver            1                   27d16d443fe31
	e01962e6badd5       0048118155842       About a minute ago   Running             etcd                      1                   f57cd0e8764a4
	8e314399ffbfa       cf9cba6c3e4a8       About a minute ago   Running             kube-controller-manager   1                   ba4ab5e4b6029
	90a9fc57f1904       7da2efaa5b480       About a minute ago   Running             kube-scheduler            1                   7b1abe2d483fa
	1ad258249d601       ea6b13ed84e03       3 minutes ago        Exited              kube-proxy                0                   9a8baa9115fed
	aac407980692a       cf9cba6c3e4a8       4 minutes ago        Exited              kube-controller-manager   0                   41d3f9624cd5c
	25fbe133425d8       7da2efaa5b480       4 minutes ago        Exited              kube-scheduler            0                   75b1c13f00ae4
	28755f53d020c       b2462aa94d403       4 minutes ago        Exited              kube-apiserver            0                   11d218d1e7490
	1737f02b01d38       0048118155842       4 minutes ago        Exited              etcd                      0                   00e51ba67e2f8
	
	* 
	* ==> describe nodes <==
	* Name:               newest-cni-20210817003608-111344
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-20210817003608-111344
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48
	                    minikube.k8s.io/name=newest-cni-20210817003608-111344
	                    minikube.k8s.io/updated_at=2021_08_17T00_44_14_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Aug 2021 00:44:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-20210817003608-111344
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Aug 2021 00:47:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Aug 2021 00:46:34 +0000   Tue, 17 Aug 2021 00:44:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Aug 2021 00:46:34 +0000   Tue, 17 Aug 2021 00:44:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Aug 2021 00:46:34 +0000   Tue, 17 Aug 2021 00:44:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Aug 2021 00:46:34 +0000   Tue, 17 Aug 2021 00:44:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-20210817003608-111344
	Capacity:
	  cpu:                4
	  ephemeral-storage:  65792556Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             20481980Ki
	  pods:               110
	Allocatable:
	  cpu:                4
	  ephemeral-storage:  65792556Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             20481980Ki
	  pods:               110
	System Info:
	  Machine ID:                 dfc5def84a78402c9caa00a7cad25a86
	  System UUID:                815fae9c-df15-489f-a826-e5f5275d966a
	  Boot ID:                    59d49a8b-044c-440e-a1d3-94e728b56235
	  Kernel Version:             4.19.121-linuxkit
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.8
	  Kubelet Version:            v1.22.0-rc.0
	  Kube-Proxy Version:         v1.22.0-rc.0
	PodCIDR:                      192.168.0.0/24
	PodCIDRs:                     192.168.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-78fcd69978-4rqlg                                    100m (2%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m31s
	  kube-system                 etcd-newest-cni-20210817003608-111344                       100m (2%)     0 (0%)      100Mi (0%)       0 (0%)         3m36s
	  kube-system                 kube-apiserver-newest-cni-20210817003608-111344             250m (6%)     0 (0%)      0 (0%)           0 (0%)         3m36s
	  kube-system                 kube-controller-manager-newest-cni-20210817003608-111344    200m (5%)     0 (0%)      0 (0%)           0 (0%)         3m36s
	  kube-system                 kube-proxy-9nj8l                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 kube-scheduler-newest-cni-20210817003608-111344             100m (2%)     0 (0%)      0 (0%)           0 (0%)         3m36s
	  kube-system                 metrics-server-7c784ccb57-vkvfp                             100m (2%)     0 (0%)      300Mi (1%)       0 (0%)         3m4s
	  kube-system                 storage-provisioner                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m14s
	  kubernetes-dashboard        dashboard-metrics-scraper-8685c45546-hf47r                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  kubernetes-dashboard        kubernetes-dashboard-6fcdf4f6d-smdrj                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (21%)  0 (0%)
	  memory             470Mi (2%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From     Message
	  ----    ------                   ----                   ----     -------
	  Normal  NodeHasSufficientPID     4m24s (x7 over 4m25s)  kubelet  Node newest-cni-20210817003608-111344 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  4m23s (x8 over 4m25s)  kubelet  Node newest-cni-20210817003608-111344 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m23s (x8 over 4m25s)  kubelet  Node newest-cni-20210817003608-111344 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 3m44s                  kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m43s                  kubelet  Node newest-cni-20210817003608-111344 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m43s                  kubelet  Node newest-cni-20210817003608-111344 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m43s                  kubelet  Node newest-cni-20210817003608-111344 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             3m43s                  kubelet  Node newest-cni-20210817003608-111344 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  3m39s                  kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m30s                  kubelet  Node newest-cni-20210817003608-111344 status is now: NodeReady
	  Normal  Starting                 118s                   kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  117s                   kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  116s (x8 over 118s)    kubelet  Node newest-cni-20210817003608-111344 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    116s (x8 over 118s)    kubelet  Node newest-cni-20210817003608-111344 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     116s (x7 over 118s)    kubelet  Node newest-cni-20210817003608-111344 status is now: NodeHasSufficientPID
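	
	The request percentages under "Allocated resources" are each total divided by the node's Allocatable figures: 850m of the 4 allocatable CPUs is 21%, and 470Mi of the 20481980Ki of memory is about 2%. A minimal Go sketch of that arithmetic, using only the values reported in this section (the program and its constants are illustrative, not part of the test suite):
	
	package main
	
	import "fmt"
	
	func main() {
		// Figures copied from the Allocatable / Allocated resources tables above.
		cpuRequestsMilli := 850     // 850m of CPU requested in total
		cpuAllocMilli := 4 * 1000   // 4 allocatable CPUs
		memRequestsKi := 470 * 1024 // 470Mi of memory requested in total
		memAllocKi := 20481980      // 20481980Ki of allocatable memory
	
		fmt.Printf("cpu:    %d%%\n", 100*cpuRequestsMilli/cpuAllocMilli) // prints 21%
		fmt.Printf("memory: %d%%\n", 100*memRequestsKi/memAllocKi)       // prints 2%
	}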
	
	* 
	* ==> dmesg <==
	* [  +0.000044]  hv_stimer0_isr+0x20/0x2d
	[  +0.000053]  hv_stimer0_vector_handler+0x3b/0x57
	[  +0.000021]  hv_stimer0_callback_vector+0xf/0x20
	[  +0.000002]  </IRQ>
	[  +0.000002] RIP: 0010:native_safe_halt+0x7/0x8
	[  +0.000002] Code: 60 02 df f0 83 44 24 fc 00 48 8b 00 a8 08 74 0b 65 81 25 dd ce 6f 6e ff ff ff 7f c3 e8 ce e6 72 ff f4 c3 e8 c7 e6 72 ff fb f4 <c3> 0f 1f 44 00 00 53 e8 69 0e 82 ff 65 8b 35 83 64 6f 6e 31 ff e8
	[  +0.000001] RSP: 0018:ffffb51d800a3ec8 EFLAGS: 00000246 ORIG_RAX: ffffffffffffff12
	[  +0.000002] RAX: ffffffff91918b30 RBX: 0000000000000001 RCX: ffffffff92253150
	[  +0.000001] RDX: 0000000000171622 RSI: 0000000000000001 RDI: 0000000000000001
	[  +0.000001] RBP: 0000000000000000 R08: 0000007cfc1104b2 R09: 0000000000000002
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: ffff8d162e19ef80 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000002]  ? __sched_text_end+0x1/0x1
	[  +0.000021]  ? native_safe_halt+0x5/0x8
	[  +0.000002]  default_idle+0x1b/0x2c
	[  +0.000003]  do_idle+0xe5/0x216
	[  +0.000003]  cpu_startup_entry+0x6f/0x71
	[  +0.000019]  start_secondary+0x18e/0x1a9
	[  +0.000032]  secondary_startup_64+0xa4/0xb0
	[  +0.000020] ---[ end trace b7d34331c4afdfb9 ]---
	[Aug17 00:14] tee (131347): /proc/127190/oom_adj is deprecated, please use /proc/127190/oom_score_adj instead.
	[Aug17 00:18] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000007] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.100196] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000006] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	
	* 
	* ==> etcd [1737f02b01d3] <==
	* {"level":"info","ts":"2021-08-17T00:43:45.714Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2021-08-17T00:43:45.714Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2021-08-17T00:43:45.719Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-08-17T00:43:45.714Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-08-17T00:43:45.728Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2021-08-17T00:43:45.772Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2021-08-17T00:43:45.712Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:newest-cni-20210817003608-111344 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2021-08-17T00:44:29.158Z","caller":"traceutil/trace.go:171","msg":"trace[736617236] transaction","detail":"{read_only:false; response_revision:382; number_of_response:1; }","duration":"184.6827ms","start":"2021-08-17T00:44:28.971Z","end":"2021-08-17T00:44:29.155Z","steps":["trace[736617236] 'process raft request'  (duration: 173.0452ms)","trace[736617236] 'compare'  (duration: 11.5305ms)"],"step_count":2}
	{"level":"info","ts":"2021-08-17T00:44:29.150Z","caller":"traceutil/trace.go:171","msg":"trace[789087407] transaction","detail":"{read_only:false; response_revision:381; number_of_response:1; }","duration":"176.3015ms","start":"2021-08-17T00:44:28.971Z","end":"2021-08-17T00:44:29.147Z","steps":["trace[789087407] 'process raft request'  (duration: 81.6486ms)","trace[789087407] 'compare'  (duration: 89.8943ms)"],"step_count":2}
	{"level":"info","ts":"2021-08-17T00:44:56.894Z","caller":"traceutil/trace.go:171","msg":"trace[139588176] transaction","detail":"{read_only:false; response_revision:485; number_of_response:1; }","duration":"151.3941ms","start":"2021-08-17T00:44:56.743Z","end":"2021-08-17T00:44:56.894Z","steps":["trace[139588176] 'process raft request'  (duration: 129.8582ms)"],"step_count":1}
	{"level":"info","ts":"2021-08-17T00:44:56.945Z","caller":"traceutil/trace.go:171","msg":"trace[252463044] transaction","detail":"{read_only:false; response_revision:486; number_of_response:1; }","duration":"201.9073ms","start":"2021-08-17T00:44:56.743Z","end":"2021-08-17T00:44:56.945Z","steps":["trace[252463044] 'process raft request'  (duration: 167.5164ms)"],"step_count":1}
	{"level":"info","ts":"2021-08-17T00:44:57.506Z","caller":"traceutil/trace.go:171","msg":"trace[132183243] linearizableReadLoop","detail":"{readStateIndex:513; appliedIndex:512; }","duration":"100.3325ms","start":"2021-08-17T00:44:57.406Z","end":"2021-08-17T00:44:57.506Z","steps":["trace[132183243] 'read index received'  (duration: 21.2019ms)","trace[132183243] 'applied index is now lower than readState.Index'  (duration: 79.129ms)"],"step_count":2}
	{"level":"info","ts":"2021-08-17T00:44:57.507Z","caller":"traceutil/trace.go:171","msg":"trace[1559180452] transaction","detail":"{read_only:false; response_revision:498; number_of_response:1; }","duration":"103.3911ms","start":"2021-08-17T00:44:57.403Z","end":"2021-08-17T00:44:57.507Z","steps":["trace[1559180452] 'process raft request'  (duration: 25.894ms)","trace[1559180452] 'compare'  (duration: 76.6432ms)"],"step_count":2}
	{"level":"warn","ts":"2021-08-17T00:44:57.532Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"128.3711ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2021-08-17T00:44:57.532Z","caller":"traceutil/trace.go:171","msg":"trace[1579444204] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:498; }","duration":"128.6348ms","start":"2021-08-17T00:44:57.403Z","end":"2021-08-17T00:44:57.532Z","steps":["trace[1579444204] 'agreement among raft nodes before linearized reading'  (duration: 103.6843ms)","trace[1579444204] 'get authentication metadata'  (duration: 24.6552ms)"],"step_count":2}
	{"level":"warn","ts":"2021-08-17T00:44:57.546Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"131.6455ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/metrics-server\" ","response":"range_response_count:1 size:708"}
	{"level":"info","ts":"2021-08-17T00:44:57.546Z","caller":"traceutil/trace.go:171","msg":"trace[5736960] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/metrics-server; range_end:; response_count:1; response_revision:498; }","duration":"160.8145ms","start":"2021-08-17T00:44:57.385Z","end":"2021-08-17T00:44:57.546Z","steps":["trace[5736960] 'agreement among raft nodes before linearized reading'  (duration: 125.6888ms)"],"step_count":1}
	{"level":"info","ts":"2021-08-17T00:44:58.871Z","caller":"traceutil/trace.go:171","msg":"trace[2079699221] transaction","detail":"{read_only:false; response_revision:515; number_of_response:1; }","duration":"102.1328ms","start":"2021-08-17T00:44:58.769Z","end":"2021-08-17T00:44:58.871Z","steps":["trace[2079699221] 'compare'  (duration: 77.3838ms)"],"step_count":1}
	{"level":"info","ts":"2021-08-17T00:45:05.417Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2021-08-17T00:45:05.420Z","caller":"embed/etcd.go:367","msg":"closing etcd server","name":"newest-cni-20210817003608-111344","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	WARNING: 2021/08/17 00:45:05 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2021-08-17T00:45:05.645Z","caller":"etcdserver/server.go:1438","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2021-08-17T00:45:05.695Z","caller":"embed/etcd.go:562","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2021-08-17T00:45:05.703Z","caller":"embed/etcd.go:567","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2021-08-17T00:45:05.707Z","caller":"embed/etcd.go:369","msg":"closed etcd server","name":"newest-cni-20210817003608-111344","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	* 
	* ==> etcd [e01962e6badd] <==
	* {"level":"warn","ts":"2021-08-17T00:46:37.089Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"102.4444ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/newest-cni-20210817003608-111344.169bf1724e5b2994\" ","response":"range_response_count:1 size:737"}
	{"level":"info","ts":"2021-08-17T00:46:37.089Z","caller":"traceutil/trace.go:171","msg":"trace[1907334171] range","detail":"{range_begin:/registry/events/default/newest-cni-20210817003608-111344.169bf1724e5b2994; range_end:; response_count:1; response_revision:541; }","duration":"102.5457ms","start":"2021-08-17T00:46:36.986Z","end":"2021-08-17T00:46:37.089Z","steps":["trace[1907334171] 'agreement among raft nodes before linearized reading'  (duration: 83.6833ms)","trace[1907334171] 'range keys from in-memory index tree'  (duration: 18.276ms)"],"step_count":2}
	{"level":"info","ts":"2021-08-17T00:46:37.357Z","caller":"traceutil/trace.go:171","msg":"trace[307722950] linearizableReadLoop","detail":"{readStateIndex:567; appliedIndex:567; }","duration":"109.8043ms","start":"2021-08-17T00:46:37.247Z","end":"2021-08-17T00:46:37.357Z","steps":["trace[307722950] 'read index received'  (duration: 109.795ms)","trace[307722950] 'applied index is now lower than readState.Index'  (duration: 7.6µs)"],"step_count":2}
	{"level":"warn","ts":"2021-08-17T00:46:37.409Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"176.7049ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/system:controller:namespace-controller\" ","response":"range_response_count:1 size:757"}
	{"level":"info","ts":"2021-08-17T00:46:37.410Z","caller":"traceutil/trace.go:171","msg":"trace[198857094] range","detail":"{range_begin:/registry/clusterrolebindings/system:controller:namespace-controller; range_end:; response_count:1; response_revision:544; }","duration":"176.8048ms","start":"2021-08-17T00:46:37.232Z","end":"2021-08-17T00:46:37.409Z","steps":["trace[198857094] 'agreement among raft nodes before linearized reading'  (duration: 132.8038ms)","trace[198857094] 'range keys from in-memory index tree'  (duration: 43.8595ms)"],"step_count":2}
	{"level":"warn","ts":"2021-08-17T00:46:37.415Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"145.7189ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2021-08-17T00:46:37.417Z","caller":"traceutil/trace.go:171","msg":"trace[982375475] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:544; }","duration":"148.3423ms","start":"2021-08-17T00:46:37.269Z","end":"2021-08-17T00:46:37.417Z","steps":["trace[982375475] 'agreement among raft nodes before linearized reading'  (duration: 118.7866ms)"],"step_count":1}
	{"level":"warn","ts":"2021-08-17T00:46:37.422Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"149.7943ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/newest-cni-20210817003608-111344.169bf1724e5b6364\" ","response":"range_response_count:1 size:731"}
	{"level":"info","ts":"2021-08-17T00:46:37.422Z","caller":"traceutil/trace.go:171","msg":"trace[1843020002] range","detail":"{range_begin:/registry/events/default/newest-cni-20210817003608-111344.169bf1724e5b6364; range_end:; response_count:1; response_revision:544; }","duration":"154.348ms","start":"2021-08-17T00:46:37.268Z","end":"2021-08-17T00:46:37.422Z","steps":["trace[1843020002] 'agreement among raft nodes before linearized reading'  (duration: 118.8099ms)","trace[1843020002] 'range keys from in-memory index tree'  (duration: 16.2407ms)"],"step_count":2}
	{"level":"warn","ts":"2021-08-17T00:47:40.964Z","caller":"etcdserver/v3_server.go:815","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15638322346374210334,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2021-08-17T00:47:41.465Z","caller":"etcdserver/v3_server.go:815","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15638322346374210334,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2021-08-17T00:47:41.642Z","caller":"wal/wal.go:802","msg":"slow fdatasync","took":"1.1981494s","expected-duration":"1s"}
	{"level":"warn","ts":"2021-08-17T00:47:41.645Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.244331s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2021-08-17T00:47:41.648Z","caller":"traceutil/trace.go:171","msg":"trace[937139446] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:725; }","duration":"1.2473221s","start":"2021-08-17T00:47:40.401Z","end":"2021-08-17T00:47:41.648Z","steps":["trace[937139446] 'range keys from in-memory index tree'  (duration: 1.2438946s)"],"step_count":1}
	{"level":"warn","ts":"2021-08-17T00:47:41.648Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-08-17T00:47:40.401Z","time spent":"1.2474288s","remote":"127.0.0.1:51448","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2021-08-17T00:47:41.666Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.2187631s","expected-duration":"100ms","prefix":"","request":"header:<ID:15638322346374210335 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/storage-provisioner.169bf188e20b73cc\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/storage-provisioner.169bf188e20b73cc\" value_size:608 lease:6414950309519434462 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2021-08-17T00:47:41.666Z","caller":"traceutil/trace.go:171","msg":"trace[1599847633] linearizableReadLoop","detail":"{readStateIndex:776; appliedIndex:775; }","duration":"1.2026722s","start":"2021-08-17T00:47:40.463Z","end":"2021-08-17T00:47:41.666Z","steps":["trace[1599847633] 'read index received'  (duration: 1.179869s)","trace[1599847633] 'applied index is now lower than readState.Index'  (duration: 22.8016ms)"],"step_count":2}
	{"level":"warn","ts":"2021-08-17T00:47:41.666Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"541.5315ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2021-08-17T00:47:41.666Z","caller":"traceutil/trace.go:171","msg":"trace[1290298747] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:726; }","duration":"541.5807ms","start":"2021-08-17T00:47:41.124Z","end":"2021-08-17T00:47:41.666Z","steps":["trace[1290298747] 'agreement among raft nodes before linearized reading'  (duration: 541.4838ms)"],"step_count":1}
	{"level":"warn","ts":"2021-08-17T00:47:41.666Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-08-17T00:47:41.124Z","time spent":"541.7044ms","remote":"127.0.0.1:51448","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2021-08-17T00:47:41.668Z","caller":"traceutil/trace.go:171","msg":"trace[470476992] transaction","detail":"{read_only:false; response_revision:726; number_of_response:1; }","duration":"1.2242546s","start":"2021-08-17T00:47:40.443Z","end":"2021-08-17T00:47:41.668Z","steps":["trace[470476992] 'compare'  (duration: 1.1997059s)","trace[470476992] 'marshal mvccpb.KeyValue' {req_type:put; key:/registry/events/kube-system/storage-provisioner.169bf188e20b73cc; req_size:688; } (duration: 18.9335ms)"],"step_count":2}
	{"level":"warn","ts":"2021-08-17T00:47:41.668Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-08-17T00:47:40.443Z","time spent":"1.2243898s","remote":"127.0.0.1:51442","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":691,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/storage-provisioner.169bf188e20b73cc\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/storage-provisioner.169bf188e20b73cc\" value_size:608 lease:6414950309519434462 >> failure:<>"}
	{"level":"warn","ts":"2021-08-17T00:47:41.668Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.20527s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/storage-provisioner\" ","response":"range_response_count:1 size:3699"}
	{"level":"info","ts":"2021-08-17T00:47:41.668Z","caller":"traceutil/trace.go:171","msg":"trace[671774373] range","detail":"{range_begin:/registry/pods/kube-system/storage-provisioner; range_end:; response_count:1; response_revision:726; }","duration":"1.20532s","start":"2021-08-17T00:47:40.463Z","end":"2021-08-17T00:47:41.668Z","steps":["trace[671774373] 'agreement among raft nodes before linearized reading'  (duration: 1.2052217s)"],"step_count":1}
	{"level":"warn","ts":"2021-08-17T00:47:41.668Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-08-17T00:47:40.463Z","time spent":"1.2053729s","remote":"127.0.0.1:51478","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":1,"response size":3722,"request content":"key:\"/registry/pods/kube-system/storage-provisioner\" "}
	
	* 
	* ==> kernel <==
	*  00:48:01 up  1:43,  0 users,  load average: 35.14, 29.16, 19.62
	Linux newest-cni-20210817003608-111344 4.19.121-linuxkit #1 SMP Tue Dec 1 17:50:32 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [28755f53d020] <==
	* W0817 00:45:08.213062       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:45:08.215998       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:45:08.217234       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:45:08.219274       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:45:08.233244       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:45:08.239044       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:45:08.240524       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:45:08.244087       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:45:08.274940       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:45:08.278160       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:45:08.279218       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:45:08.289250       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:45:08.302213       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:45:08.308297       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:45:08.331264       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:45:08.333825       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:45:08.339508       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:45:08.372571       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:45:08.383218       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:45:08.387833       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:45:08.398467       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:45:08.403070       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:45:08.403224       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:45:08.404464       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0817 00:45:08.443378       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	
	* 
	* ==> kube-apiserver [f5faeb9a923f] <==
	* I0817 00:46:33.463203       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	W0817 00:46:40.793082       1 handler_proxy.go:104] no RequestInfo found in the context
	E0817 00:46:40.793201       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0817 00:46:40.793213       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0817 00:46:43.709700       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0817 00:46:44.117821       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0817 00:46:45.158746       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0817 00:46:45.388329       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0817 00:46:45.455341       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0817 00:46:54.920736       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0817 00:46:56.928571       1 controller.go:611] quota admission added evaluator for: namespaces
	I0817 00:46:57.251424       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0817 00:46:57.947156       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0817 00:46:58.587400       1 controller.go:611] quota admission added evaluator for: endpoints
	W0817 00:47:40.796274       1 handler_proxy.go:104] no RequestInfo found in the context
	E0817 00:47:40.796629       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0817 00:47:40.796651       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0817 00:47:41.670595       1 trace.go:205] Trace[456757783]: "Create" url:/api/v1/namespaces/kube-system/events,user-agent:kubelet/v1.22.0 (linux/amd64) kubernetes/f27a086,audit-id:ab15c8aa-f784-4e17-a9db-a8922b8fa2a0,client:192.168.76.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (17-Aug-2021 00:47:40.439) (total time: 1230ms):
	Trace[456757783]: ---"Object stored in database" 1229ms (00:47:41.669)
	Trace[456757783]: [1.2309701s] [1.2309701s] END
	I0817 00:47:41.684779       1 trace.go:205] Trace[1514084966]: "Get" url:/api/v1/namespaces/kube-system/pods/storage-provisioner,user-agent:kubelet/v1.22.0 (linux/amd64) kubernetes/f27a086,audit-id:e30a59ac-43da-4f1d-8ff2-16817fb0c6d4,client:192.168.76.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (17-Aug-2021 00:47:40.462) (total time: 1221ms):
	Trace[1514084966]: ---"About to write a response" 1216ms (00:47:41.679)
	Trace[1514084966]: [1.2215656s] [1.2215656s] END
	
	* 
	* ==> kube-controller-manager [8e314399ffbf] <==
	* I0817 00:46:55.003026       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0817 00:46:57.293234       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-8685c45546 to 1"
	I0817 00:46:57.450493       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-6fcdf4f6d to 1"
	I0817 00:46:57.576580       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0817 00:46:57.620731       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0817 00:46:57.621643       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0817 00:46:57.673641       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0817 00:46:57.675627       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0817 00:46:57.676275       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0817 00:46:57.692037       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0817 00:46:57.692445       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0817 00:46:57.712164       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0817 00:46:57.712879       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0817 00:46:57.713133       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0817 00:46:57.713170       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0817 00:46:57.788171       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0817 00:46:57.791170       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0817 00:46:57.793306       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0817 00:46:57.793404       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0817 00:46:57.914143       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-6fcdf4f6d-smdrj"
	I0817 00:46:57.997601       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-8685c45546-hf47r"
	E0817 00:47:24.614048       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0817 00:47:25.296080       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0817 00:47:54.759701       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0817 00:47:55.431151       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
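	
	The FailedCreate burst for the two dashboard ReplicaSets is the usual create-ordering race: the ReplicaSet controller asks for pods before the kubernetes-dashboard ServiceAccount exists, admission refuses, and the controller requeues until the SuccessfulCreate events at 00:46:57. A toy Go sketch of that retry pattern (purely illustrative, not the controller's actual code):
	
	package main
	
	import (
		"errors"
		"fmt"
		"time"
	)
	
	var errNoSA = errors.New(`serviceaccount "kubernetes-dashboard" not found`)
	
	// createPod stands in for the admission-checked pod create call.
	func createPod(saReady bool) error {
		if !saReady {
			return errNoSA
		}
		return nil
	}
	
	func main() {
		const saReadyAfter = 3 // hypothetical: the ServiceAccount appears after attempt 3
		for attempt := 1; ; attempt++ {
			if err := createPod(attempt > saReadyAfter); err != nil {
				fmt.Printf("attempt %d: FailedCreate: %v\n", attempt, err)
				time.Sleep(50 * time.Millisecond) // the real controller uses a rate-limited workqueue
				continue
			}
			fmt.Printf("attempt %d: SuccessfulCreate\n", attempt)
			return
		}
	}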
	
	* 
	* ==> kube-controller-manager [aac407980692] <==
	* I0817 00:44:28.259536       1 shared_informer.go:247] Caches are synced for resource quota 
	I0817 00:44:28.275128       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client 
	I0817 00:44:28.275157       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving 
	I0817 00:44:28.275172       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client 
	I0817 00:44:28.275191       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0817 00:44:28.433263       1 event.go:291] "Event occurred" object="kube-system/kube-controller-manager-newest-cni-20210817003608-111344" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0817 00:44:29.490142       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0817 00:44:29.857041       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0817 00:44:29.857081       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0817 00:44:29.894473       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0817 00:44:30.010871       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-9nj8l"
	I0817 00:44:30.128001       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-78fcd69978 to 2"
	I0817 00:44:30.453790       1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-78fcd69978-4rqlg"
	I0817 00:44:30.652384       1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-78fcd69978-8gr9m"
	I0817 00:44:31.766536       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-78fcd69978 to 1"
	I0817 00:44:31.969456       1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-78fcd69978-8gr9m"
	I0817 00:44:33.156848       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0817 00:44:56.708209       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-7c784ccb57 to 1"
	I0817 00:44:57.011563       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-7c784ccb57-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0817 00:44:57.139502       1 replica_set.go:536] sync "kube-system/metrics-server-7c784ccb57" failed with pods "metrics-server-7c784ccb57-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	E0817 00:44:57.203165       1 replica_set.go:536] sync "kube-system/metrics-server-7c784ccb57" failed with pods "metrics-server-7c784ccb57-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0817 00:44:57.203550       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-7c784ccb57-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	I0817 00:44:57.557498       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-7c784ccb57-vkvfp"
	E0817 00:44:58.648337       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server could not find the requested resource
	W0817 00:45:00.118933       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server could not find the requested resource]
	
	* 
	* ==> kube-proxy [03ba21d79411] <==
	* I0817 00:47:11.934596       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0817 00:47:11.948255       1 server_others.go:140] Detected node IP 192.168.76.2
	W0817 00:47:11.948346       1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
	I0817 00:47:12.601553       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0817 00:47:12.601643       1 server_others.go:212] Using iptables Proxier.
	I0817 00:47:12.601661       1 server_others.go:219] creating dualStackProxier for iptables.
	W0817 00:47:12.601711       1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0817 00:47:12.637783       1 server.go:649] Version: v1.22.0-rc.0
	I0817 00:47:12.663345       1 config.go:315] Starting service config controller
	I0817 00:47:12.663425       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0817 00:47:12.663664       1 config.go:224] Starting endpoint slice config controller
	I0817 00:47:12.663671       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	E0817 00:47:12.748626       1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"newest-cni-20210817003608-111344.169bf1826aae74b0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc03edfa4277bc95c, ext:1558288801, loc:(*time.Location)(0x2d7f3c0)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-newest-cni-20210817003608-111344", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"newest-cni-20210817003608-111344", UID:"newest-cni-20210817003608-111344", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "newest-cni-20210817003608-111344.169bf1826aae74b0" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
	I0817 00:47:12.769558       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0817 00:47:12.769653       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-proxy [1ad258249d60] <==
	* I0817 00:44:47.824778       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0817 00:44:47.824957       1 server_others.go:140] Detected node IP 192.168.76.2
	W0817 00:44:47.825371       1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
	I0817 00:44:49.172138       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0817 00:44:49.172200       1 server_others.go:212] Using iptables Proxier.
	I0817 00:44:49.172220       1 server_others.go:219] creating dualStackProxier for iptables.
	W0817 00:44:49.172257       1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0817 00:44:49.217902       1 server.go:649] Version: v1.22.0-rc.0
	I0817 00:44:49.279820       1 config.go:315] Starting service config controller
	I0817 00:44:49.279889       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0817 00:44:49.302573       1 config.go:224] Starting endpoint slice config controller
	I0817 00:44:49.302629       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0817 00:44:49.410009       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	E0817 00:44:49.500512       1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"newest-cni-20210817003608-111344.169bf16107717738", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc03edf804fb5a1e4, ext:4123142401, loc:(*time.Location)(0x2d7f3c0)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-newest-cni-20210817003608-111344", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"newest-cni-20210817003608-111344", UID:"newest-cni-20210817003608-111344", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "newest-cni-20210817003608-111344.169bf16107717738" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
	I0817 00:44:49.582123       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [25fbe133425d] <==
	* E0817 00:44:04.688575       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0817 00:44:04.709320       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0817 00:44:04.713183       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0817 00:44:05.467160       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0817 00:44:05.599523       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0817 00:44:05.734810       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0817 00:44:05.749413       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0817 00:44:05.760878       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0817 00:44:05.770968       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0817 00:44:05.942726       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0817 00:44:05.976287       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0817 00:44:06.065026       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0817 00:44:06.082831       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0817 00:44:06.088192       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0817 00:44:06.194083       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0817 00:44:06.248087       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0817 00:44:06.250149       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0817 00:44:06.267141       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0817 00:44:07.400274       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0817 00:44:07.572424       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0817 00:44:08.738764       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0817 00:44:13.221460       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	I0817 00:45:05.310507       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0817 00:45:05.326066       1 secure_serving.go:301] Stopped listening on 127.0.0.1:10259
	I0817 00:45:05.326122       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	
	* 
	* ==> kube-scheduler [90a9fc57f190] <==
	* W0817 00:46:12.220996       1 feature_gate.go:237] Setting GA feature gate ServerSideApply=true. It will be removed in a future release.
	I0817 00:46:17.314325       1 serving.go:347] Generated self-signed cert in-memory
	W0817 00:46:30.776408       1 authentication.go:345] Error looking up in-cluster authentication configuration: Get "https://192.168.76.2:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W0817 00:46:30.776689       1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0817 00:46:30.776706       1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0817 00:46:32.938391       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0817 00:46:32.938565       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0817 00:46:32.945790       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0817 00:46:32.948113       1 secure_serving.go:195] Serving securely on 127.0.0.1:10259
	I0817 00:46:33.541038       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2021-08-17 00:45:32 UTC, end at Tue 2021-08-17 00:48:04 UTC. --
	Aug 17 00:48:00 newest-cni-20210817003608-111344 kubelet[848]: I0817 00:48:00.999914     848 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="2b6cb1780076b22363870dfc19f9a63bb42abc4fb2069cc1646d023e7358d2f3"
	Aug 17 00:48:01 newest-cni-20210817003608-111344 kubelet[848]: E0817 00:48:01.010867     848 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/143a9dd09ce88e12ec2a22bbe8cc0ef3ae7ca0b95bd6a2b6697406686aa3bcbb/kubepods/besteffort/pode6293aae-9156-48d8-a313-445bf854634e/576570585ffb7bf39499187089d4de6249cc9508cef801f2d17add07e5e834e4\": RecentStats: unable to find data in memory cache]"
	Aug 17 00:48:01 newest-cni-20210817003608-111344 kubelet[848]: E0817 00:48:01.627150     848 cni.go:361] "Error adding pod to network" err="failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kube-system/metrics-server-7c784ccb57-vkvfp" podSandboxID={Type:docker ID:2b6cb1780076b22363870dfc19f9a63bb42abc4fb2069cc1646d023e7358d2f3} podNetnsPath="/proc/6386/ns/net" networkType="bridge" networkName="crio"
	Aug 17 00:48:01 newest-cni-20210817003608-111344 kubelet[848]: E0817 00:48:01.843623     848 cni.go:380] "Error deleting pod from network" err="running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.24 -j CNI-97c5721b61b373becd59407a -m comment --comment name: \"crio\" id: \"2b6cb1780076b22363870dfc19f9a63bb42abc4fb2069cc1646d023e7358d2f3\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-97c5721b61b373becd59407a':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n" pod="kube-system/metrics-server-7c784ccb57-vkvfp" podSandboxID={Type:docker ID:2b6cb1780076b22363870dfc19f9a63bb42abc4fb2069cc1646d023e7358d2f3} podNetnsPath="/proc/6386/ns/net" networkType="bridge" networkName="crio"
	Aug 17 00:48:02 newest-cni-20210817003608-111344 kubelet[848]: E0817 00:48:02.195261     848 remote_runtime.go:116] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"2b6cb1780076b22363870dfc19f9a63bb42abc4fb2069cc1646d023e7358d2f3\" network for pod \"metrics-server-7c784ccb57-vkvfp\": networkPlugin cni failed to set up pod \"metrics-server-7c784ccb57-vkvfp_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"2b6cb1780076b22363870dfc19f9a63bb42abc4fb2069cc1646d023e7358d2f3\" network for pod \"metrics-server-7c784ccb57-vkvfp\": networkPlugin cni failed to teardown pod \"metrics-server-7c784ccb57-vkvfp_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.24 -j CNI-97c5721b61b373becd59407a -m comment --comment name: \"crio\" id: \"2b6cb1780076b22363870dfc19f9a63bb42abc4fb2069cc1646d023e7358d2f3\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-97c5721b61b373becd59407a':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]"
	Aug 17 00:48:02 newest-cni-20210817003608-111344 kubelet[848]: E0817 00:48:02.195361     848 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"2b6cb1780076b22363870dfc19f9a63bb42abc4fb2069cc1646d023e7358d2f3\" network for pod \"metrics-server-7c784ccb57-vkvfp\": networkPlugin cni failed to set up pod \"metrics-server-7c784ccb57-vkvfp_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"2b6cb1780076b22363870dfc19f9a63bb42abc4fb2069cc1646d023e7358d2f3\" network for pod \"metrics-server-7c784ccb57-vkvfp\": networkPlugin cni failed to teardown pod \"metrics-server-7c784ccb57-vkvfp_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.24 -j CNI-97c5721b61b373becd59407a -m comment --comment name: \"crio\" id: \"2b6cb1780076b22363870dfc19f9a63bb42abc4fb2069cc1646d023e7358d2f3\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-97c5721b61b373becd59407a':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kube-system/metrics-server-7c784ccb57-vkvfp"
	Aug 17 00:48:02 newest-cni-20210817003608-111344 kubelet[848]: E0817 00:48:02.195433     848 kuberuntime_manager.go:815] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"2b6cb1780076b22363870dfc19f9a63bb42abc4fb2069cc1646d023e7358d2f3\" network for pod \"metrics-server-7c784ccb57-vkvfp\": networkPlugin cni failed to set up pod \"metrics-server-7c784ccb57-vkvfp_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"2b6cb1780076b22363870dfc19f9a63bb42abc4fb2069cc1646d023e7358d2f3\" network for pod \"metrics-server-7c784ccb57-vkvfp\": networkPlugin cni failed to teardown pod \"metrics-server-7c784ccb57-vkvfp_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.24 -j CNI-97c5721b61b373becd59407a -m comment --comment name: \"crio\" id: \"2b6cb1780076b22363870dfc19f9a63bb42abc4fb2069cc1646d023e7358d2f3\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-97c5721b61b373becd59407a':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kube-system/metrics-server-7c784ccb57-vkvfp"
	Aug 17 00:48:02 newest-cni-20210817003608-111344 kubelet[848]: E0817 00:48:02.195830     848 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"metrics-server-7c784ccb57-vkvfp_kube-system(9ec6eb01-a852-4f2e-a8bb-0d9888bcf668)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"metrics-server-7c784ccb57-vkvfp_kube-system(9ec6eb01-a852-4f2e-a8bb-0d9888bcf668)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"2b6cb1780076b22363870dfc19f9a63bb42abc4fb2069cc1646d023e7358d2f3\\\" network for pod \\\"metrics-server-7c784ccb57-vkvfp\\\": networkPlugin cni failed to set up pod \\\"metrics-server-7c784ccb57-vkvfp_kube-system\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"2b6cb1780076b22363870dfc19f9a63bb42abc4fb2069cc1646d023e7358d2f3\\\" network for pod \\\"metrics-server-7c784ccb57-vkvfp\\\": networkPlugin cni failed to teardown pod \\\"metrics-server-7c784ccb57-vkvfp_kube-system\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.24 -j CNI-97c5721b61b373becd59407a -m comment --comment name: \\\"crio\\\" id: \\\"2b6cb1780076b22363870dfc19f9a63bb42abc4fb2069cc1646d023e7358d2f3\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-97c5721b61b373becd59407a':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kube-system/metrics-server-7c784ccb57-vkvfp" podUID=9ec6eb01-a852-4f2e-a8bb-0d9888bcf668
	Aug 17 00:48:02 newest-cni-20210817003608-111344 kubelet[848]: I0817 00:48:02.235438     848 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="networkPlugin cni failed on the status hook for pod \"metrics-server-7c784ccb57-vkvfp_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"2b6cb1780076b22363870dfc19f9a63bb42abc4fb2069cc1646d023e7358d2f3\""
	Aug 17 00:48:02 newest-cni-20210817003608-111344 kubelet[848]: I0817 00:48:02.284529     848 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="networkPlugin cni failed on the status hook for pod \"coredns-78fcd69978-4rqlg_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"ed48c01f6f64d942f7b0bb99581e9490a588c1ff99cb8b27680e5230ff3d2420\""
	Aug 17 00:48:02 newest-cni-20210817003608-111344 kubelet[848]: I0817 00:48:02.376637     848 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="ed48c01f6f64d942f7b0bb99581e9490a588c1ff99cb8b27680e5230ff3d2420"
	Aug 17 00:48:02 newest-cni-20210817003608-111344 kubelet[848]: I0817 00:48:02.442185     848 cni.go:333] "CNI failed to retrieve network namespace path" err="cannot find network namespace for the terminated container \"ed48c01f6f64d942f7b0bb99581e9490a588c1ff99cb8b27680e5230ff3d2420\""
	Aug 17 00:48:02 newest-cni-20210817003608-111344 kubelet[848]: E0817 00:48:02.964163     848 cni.go:361] "Error adding pod to network" err="failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d-smdrj" podSandboxID={Type:docker ID:f5569e6ec01501acb2dfa5051ba8698963766d582be88e2b680838efea8bcbc2} podNetnsPath="/proc/6399/ns/net" networkType="bridge" networkName="crio"
	Aug 17 00:48:02 newest-cni-20210817003608-111344 kubelet[848]: E0817 00:48:02.969629     848 cni.go:361] "Error adding pod to network" err="failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-hf47r" podSandboxID={Type:docker ID:576570585ffb7bf39499187089d4de6249cc9508cef801f2d17add07e5e834e4} podNetnsPath="/proc/6422/ns/net" networkType="bridge" networkName="crio"
	Aug 17 00:48:03 newest-cni-20210817003608-111344 kubelet[848]: I0817 00:48:03.330483     848 scope.go:110] "RemoveContainer" containerID="2333add2d120ccd04c2806f66429cf15a9fa8110a46b8d74c082271aca877f4f"
	Aug 17 00:48:03 newest-cni-20210817003608-111344 kubelet[848]: E0817 00:48:03.333065     848 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(af23beac-6b23-4a97-9b39-7db56aa9f154)\"" pod="kube-system/storage-provisioner" podUID=af23beac-6b23-4a97-9b39-7db56aa9f154
	Aug 17 00:48:03 newest-cni-20210817003608-111344 kubelet[848]: E0817 00:48:03.694018     848 cni.go:380] "Error deleting pod from network" err="running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.25 -j CNI-3b219aced193d090aa116f04 -m comment --comment name: \"crio\" id: \"f5569e6ec01501acb2dfa5051ba8698963766d582be88e2b680838efea8bcbc2\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-3b219aced193d090aa116f04':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n" pod="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d-smdrj" podSandboxID={Type:docker ID:f5569e6ec01501acb2dfa5051ba8698963766d582be88e2b680838efea8bcbc2} podNetnsPath="/proc/6399/ns/net" networkType="bridge" networkName="crio"
	Aug 17 00:48:03 newest-cni-20210817003608-111344 kubelet[848]: E0817 00:48:03.705490     848 cni.go:380] "Error deleting pod from network" err="running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.26 -j CNI-18cf1b86fd81ebbd6c25a5fd -m comment --comment name: \"crio\" id: \"576570585ffb7bf39499187089d4de6249cc9508cef801f2d17add07e5e834e4\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-18cf1b86fd81ebbd6c25a5fd':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-hf47r" podSandboxID={Type:docker ID:576570585ffb7bf39499187089d4de6249cc9508cef801f2d17add07e5e834e4} podNetnsPath="/proc/6422/ns/net" networkType="bridge" networkName="crio"
	Aug 17 00:48:03 newest-cni-20210817003608-111344 kubelet[848]: I0817 00:48:03.733462     848 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="networkPlugin cni failed on the status hook for pod \"metrics-server-7c784ccb57-vkvfp_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"2b6cb1780076b22363870dfc19f9a63bb42abc4fb2069cc1646d023e7358d2f3\""
	Aug 17 00:48:03 newest-cni-20210817003608-111344 kubelet[848]: I0817 00:48:03.761552     848 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="2b6cb1780076b22363870dfc19f9a63bb42abc4fb2069cc1646d023e7358d2f3"
	Aug 17 00:48:03 newest-cni-20210817003608-111344 kubelet[848]: I0817 00:48:03.839633     848 cni.go:333] "CNI failed to retrieve network namespace path" err="cannot find network namespace for the terminated container \"2b6cb1780076b22363870dfc19f9a63bb42abc4fb2069cc1646d023e7358d2f3\""
	Aug 17 00:48:04 newest-cni-20210817003608-111344 kubelet[848]: E0817 00:48:04.704471     848 remote_runtime.go:116] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"f5569e6ec01501acb2dfa5051ba8698963766d582be88e2b680838efea8bcbc2\" network for pod \"kubernetes-dashboard-6fcdf4f6d-smdrj\": networkPlugin cni failed to set up pod \"kubernetes-dashboard-6fcdf4f6d-smdrj_kubernetes-dashboard\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"f5569e6ec01501acb2dfa5051ba8698963766d582be88e2b680838efea8bcbc2\" network for pod \"kubernetes-dashboard-6fcdf4f6d-smdrj\": networkPlugin cni failed to teardown pod \"kubernetes-dashboard-6fcdf4f6d-smdrj_kubernetes-dashboard\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.25 -j CNI-3b219aced193d090aa116f04 -m comment --comment name: \"crio\" id: \"f5569e6ec01501acb2dfa5051ba8698963766d582be88e2b680838efea8bcbc2\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-3b219aced193d090aa116f04':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]"
	Aug 17 00:48:04 newest-cni-20210817003608-111344 kubelet[848]: E0817 00:48:04.704649     848 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"f5569e6ec01501acb2dfa5051ba8698963766d582be88e2b680838efea8bcbc2\" network for pod \"kubernetes-dashboard-6fcdf4f6d-smdrj\": networkPlugin cni failed to set up pod \"kubernetes-dashboard-6fcdf4f6d-smdrj_kubernetes-dashboard\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"f5569e6ec01501acb2dfa5051ba8698963766d582be88e2b680838efea8bcbc2\" network for pod \"kubernetes-dashboard-6fcdf4f6d-smdrj\": networkPlugin cni failed to teardown pod \"kubernetes-dashboard-6fcdf4f6d-smdrj_kubernetes-dashboard\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.25 -j CNI-3b219aced193d090aa116f04 -m comment --comment name: \"crio\" id: \"f5569e6ec01501acb2dfa5051ba8698963766d582be88e2b680838efea8bcbc2\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-3b219aced193d090aa116f04':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d-smdrj"
	Aug 17 00:48:04 newest-cni-20210817003608-111344 kubelet[848]: E0817 00:48:04.704820     848 kuberuntime_manager.go:815] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"f5569e6ec01501acb2dfa5051ba8698963766d582be88e2b680838efea8bcbc2\" network for pod \"kubernetes-dashboard-6fcdf4f6d-smdrj\": networkPlugin cni failed to set up pod \"kubernetes-dashboard-6fcdf4f6d-smdrj_kubernetes-dashboard\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"f5569e6ec01501acb2dfa5051ba8698963766d582be88e2b680838efea8bcbc2\" network for pod \"kubernetes-dashboard-6fcdf4f6d-smdrj\": networkPlugin cni failed to teardown pod \"kubernetes-dashboard-6fcdf4f6d-smdrj_kubernetes-dashboard\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.25 -j CNI-3b219aced193d090aa116f04 -m comment --comment name: \"crio\" id: \"f5569e6ec01501acb2dfa5051ba8698963766d582be88e2b680838efea8bcbc2\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-3b219aced193d090aa116f04':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d-smdrj"
	Aug 17 00:48:04 newest-cni-20210817003608-111344 kubelet[848]: E0817 00:48:04.704986     848 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kubernetes-dashboard-6fcdf4f6d-smdrj_kubernetes-dashboard(4e529929-07ad-471c-9d24-fa48b90a186a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kubernetes-dashboard-6fcdf4f6d-smdrj_kubernetes-dashboard(4e529929-07ad-471c-9d24-fa48b90a186a)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"f5569e6ec01501acb2dfa5051ba8698963766d582be88e2b680838efea8bcbc2\\\" network for pod \\\"kubernetes-dashboard-6fcdf4f6d-smdrj\\\": networkPlugin cni failed to set up pod \\\"kubernetes-dashboard-6fcdf4f6d-smdrj_kubernetes-dashboard\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"f5569e6ec01501acb2dfa5051ba8698963766d582be88e2b680838efea8bcbc2\\\" network for pod \\\"kubernetes-dashboard-6fcdf4f6d-smdrj\\\": networkPlugin cni failed to teardown pod \\\"kubernetes-dashboard-6fcdf4f6d-smdrj_kubernetes-dashboard\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.25 -j CNI-3b219aced193d090aa116f04 -m comment --comment name: \\\"crio\\\" id: \\\"f5569e6ec01501acb2dfa5051ba8698963766d582be88e2b680838efea8bcbc2\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-3b219aced193d090aa116f04':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d-smdrj" podUID=4e529929-07ad-471c-9d24-fa48b90a186a
	
	* 
	* ==> storage-provisioner [2333add2d120] <==
	* I0817 00:47:06.312269       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0817 00:47:36.327797       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Pause
helpers_test.go:255: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-20210817003608-111344 -n newest-cni-20210817003608-111344

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Pause
helpers_test.go:255: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-20210817003608-111344 -n newest-cni-20210817003608-111344: (4.9737625s)
helpers_test.go:262: (dbg) Run:  kubectl --context newest-cni-20210817003608-111344 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: coredns-78fcd69978-4rqlg metrics-server-7c784ccb57-vkvfp dashboard-metrics-scraper-8685c45546-hf47r kubernetes-dashboard-6fcdf4f6d-smdrj
helpers_test.go:273: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context newest-cni-20210817003608-111344 describe pod coredns-78fcd69978-4rqlg metrics-server-7c784ccb57-vkvfp dashboard-metrics-scraper-8685c45546-hf47r kubernetes-dashboard-6fcdf4f6d-smdrj
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context newest-cni-20210817003608-111344 describe pod coredns-78fcd69978-4rqlg metrics-server-7c784ccb57-vkvfp dashboard-metrics-scraper-8685c45546-hf47r kubernetes-dashboard-6fcdf4f6d-smdrj: exit status 1 (287.8702ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-78fcd69978-4rqlg" not found
	Error from server (NotFound): pods "metrics-server-7c784ccb57-vkvfp" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-8685c45546-hf47r" not found
	Error from server (NotFound): pods "kubernetes-dashboard-6fcdf4f6d-smdrj" not found

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context newest-cni-20210817003608-111344 describe pod coredns-78fcd69978-4rqlg metrics-server-7c784ccb57-vkvfp dashboard-metrics-scraper-8685c45546-hf47r kubernetes-dashboard-6fcdf4f6d-smdrj: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (63.26s)
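
The Pause failure above traces directly to the kubelet errors in the captured log: every pod sandbox create fails with "failed to set bridge addr: could not add IP address to \"cni0\": permission denied", and each subsequent teardown fails because the per-pod CNI-* NAT chains are already gone, leaving coredns, metrics-server, and the dashboard pods unable to reach Running. A minimal diagnostic sketch (added for illustration, not part of the recorded run), assuming the kic node container carries the profile name as the docker driver normally names it:

	NODE=newest-cni-20210817003608-111344
	# Pods stuck out of Running because sandbox creation keeps failing:
	kubectl --context "$NODE" get pods -A --field-selector=status.phase!=Running
	# The bridge the CNI plugin could not assign an address to:
	docker exec "$NODE" ip addr show cni0
	# The per-pod NAT chains iptables reported missing during teardown:
	docker exec "$NODE" iptables -t nat -S | grep CNI- || true
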

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (411.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p kindnet-20210817002204-111344 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker
E0817 00:49:09.195800  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210816232348-111344\client.crt: The system cannot find the path specified.
E0817 00:49:56.770923  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\auto-20210817002157-111344\client.crt: The system cannot find the path specified.
E0817 00:50:18.629302  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\false-20210817002204-111344\client.crt: The system cannot find the path specified.
E0817 00:50:20.388625  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210817000749-111344\client.crt: The system cannot find the path specified.
E0817 00:50:24.473119  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\auto-20210817002157-111344\client.crt: The system cannot find the path specified.
E0817 00:50:28.753715  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\default-k8s-different-port-20210817002733-111344\client.crt: The system cannot find the path specified.
E0817 00:50:46.356800  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\false-20210817002204-111344\client.crt: The system cannot find the path specified.
E0817 00:51:23.521957  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\old-k8s-version-20210817002204-111344\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:98: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kindnet-20210817002204-111344 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker: exit status 80 (6m51.3437992s)

                                                
                                                
-- stdout --
	* [kindnet-20210817002204-111344] minikube v1.22.0 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	  - KUBECONFIG=C:\Users\jenkins\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12230
	* Using the docker driver based on user configuration
	* Starting control plane node kindnet-20210817002204-111344 in cluster kindnet-20210817002204-111344
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.21.3 on Docker 20.10.8 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0817 00:48:42.882795   50260 out.go:298] Setting OutFile to fd 3940 ...
	I0817 00:48:42.884257   50260 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 00:48:42.884257   50260 out.go:311] Setting ErrFile to fd 3228...
	I0817 00:48:42.884257   50260 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 00:48:42.903371   50260 out.go:305] Setting JSON to false
	I0817 00:48:42.920260   50260 start.go:111] hostinfo: {"hostname":"windows-server-2","uptime":8369369,"bootTime":1620791953,"procs":148,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2f8328f4-5428-47c7-ab5a-b32e2504bd6f"}
	W0817 00:48:42.920430   50260 start.go:119] gopshost.Virtualization returned error: not implemented yet
	I0817 00:48:42.923235   50260 out.go:177] * [kindnet-20210817002204-111344] minikube v1.22.0 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	I0817 00:48:42.923763   50260 notify.go:169] Checking for updates...
	I0817 00:48:42.926013   50260 out.go:177]   - KUBECONFIG=C:\Users\jenkins\minikube-integration\kubeconfig
	I0817 00:48:42.927897   50260 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins\minikube-integration\.minikube
	I0817 00:48:42.934462   50260 out.go:177]   - MINIKUBE_LOCATION=12230
	I0817 00:48:42.936040   50260 config.go:177] Loaded profile config "calico-20210817002204-111344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.21.3
	I0817 00:48:42.936687   50260 config.go:177] Loaded profile config "custom-weave-20210817002204-111344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.21.3
	I0817 00:48:42.937256   50260 config.go:177] Loaded profile config "enable-default-cni-20210817002157-111344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.21.3
	I0817 00:48:42.937707   50260 driver.go:335] Setting default libvirt URI to qemu:///system
	I0817 00:48:44.714437   50260 docker.go:132] docker version: linux-20.10.2
	I0817 00:48:44.721222   50260 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 00:48:45.517625   50260 info.go:263] docker info: {ID:4XCY:3GZD:KK67:IPM7:RRQF:WWZF:OGQ6:X6HQ:572M:7N57:P63G:EAE5 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:true NGoroutines:61 SystemTime:2021-08-17 00:48:45.1435308 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0817 00:48:45.525420   50260 out.go:177] * Using the docker driver based on user configuration
	I0817 00:48:45.525522   50260 start.go:278] selected driver: docker
	I0817 00:48:45.525522   50260 start.go:751] validating driver "docker" against <nil>
	I0817 00:48:45.525522   50260 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0817 00:48:45.612810   50260 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 00:48:46.412309   50260 info.go:263] docker info: {ID:4XCY:3GZD:KK67:IPM7:RRQF:WWZF:OGQ6:X6HQ:572M:7N57:P63G:EAE5 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:62 SystemTime:2021-08-17 00:48:46.0511009 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0817 00:48:46.412562   50260 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0817 00:48:46.413278   50260 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0817 00:48:46.413278   50260 cni.go:93] Creating CNI manager for "kindnet"
	I0817 00:48:46.413278   50260 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0817 00:48:46.413278   50260 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0817 00:48:46.413278   50260 start_flags.go:272] Found "CNI" CNI - setting NetworkPlugin=cni
	I0817 00:48:46.413500   50260 start_flags.go:277] config:
	{Name:kindnet-20210817002204-111344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:kindnet-20210817002204-111344 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 00:48:46.416607   50260 out.go:177] * Starting control plane node kindnet-20210817002204-111344 in cluster kindnet-20210817002204-111344
	I0817 00:48:46.416850   50260 cache.go:117] Beginning downloading kic base image for docker with docker
	I0817 00:48:46.424281   50260 out.go:177] * Pulling base image ...
	I0817 00:48:46.424525   50260 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
	I0817 00:48:46.424686   50260 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0817 00:48:46.424686   50260 preload.go:147] Found local preload: C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4
	I0817 00:48:46.424850   50260 cache.go:56] Caching tarball of preloaded images
	I0817 00:48:46.425427   50260 preload.go:173] Found C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0817 00:48:46.425559   50260 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on docker
	I0817 00:48:46.425559   50260 profile.go:148] Saving config to C:\Users\jenkins\minikube-integration\.minikube\profiles\kindnet-20210817002204-111344\config.json ...
	I0817 00:48:46.426164   50260 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\.minikube\profiles\kindnet-20210817002204-111344\config.json: {Name:mkdac38c83d5bc4085e084acc4cd0286d4309b97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 00:48:46.919466   50260 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0817 00:48:46.919466   50260 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0817 00:48:46.919792   50260 cache.go:205] Successfully downloaded all kic artifacts
	I0817 00:48:46.919929   50260 start.go:313] acquiring machines lock for kindnet-20210817002204-111344: {Name:mkb6b59e846320598fb8c956b8f5b8ff908a2d2e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 00:48:46.920151   50260 start.go:317] acquired machines lock for "kindnet-20210817002204-111344" in 0s
	I0817 00:48:46.920370   50260 start.go:89] Provisioning new machine with config: &{Name:kindnet-20210817002204-111344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:kindnet-20210817002204-111344 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0817 00:48:46.920657   50260 start.go:126] createHost starting for "" (driver="docker")
	I0817 00:48:46.923301   50260 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0817 00:48:46.923862   50260 start.go:160] libmachine.API.Create for "kindnet-20210817002204-111344" (driver="docker")
	I0817 00:48:46.924107   50260 client.go:168] LocalClient.Create starting
	I0817 00:48:46.924750   50260 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem
	I0817 00:48:46.925125   50260 main.go:130] libmachine: Decoding PEM data...
	I0817 00:48:46.925244   50260 main.go:130] libmachine: Parsing certificate...
	I0817 00:48:46.925586   50260 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem
	I0817 00:48:46.925786   50260 main.go:130] libmachine: Decoding PEM data...
	I0817 00:48:46.925948   50260 main.go:130] libmachine: Parsing certificate...
	I0817 00:48:46.935964   50260 cli_runner.go:115] Run: docker network inspect kindnet-20210817002204-111344 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0817 00:48:47.478153   50260 cli_runner.go:162] docker network inspect kindnet-20210817002204-111344 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0817 00:48:47.486413   50260 network_create.go:255] running [docker network inspect kindnet-20210817002204-111344] to gather additional debugging logs...
	I0817 00:48:47.486413   50260 cli_runner.go:115] Run: docker network inspect kindnet-20210817002204-111344
	W0817 00:48:47.989674   50260 cli_runner.go:162] docker network inspect kindnet-20210817002204-111344 returned with exit code 1
	I0817 00:48:47.989674   50260 network_create.go:258] error running [docker network inspect kindnet-20210817002204-111344]: docker network inspect kindnet-20210817002204-111344: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kindnet-20210817002204-111344
	I0817 00:48:47.989849   50260 network_create.go:260] output of [docker network inspect kindnet-20210817002204-111344]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kindnet-20210817002204-111344
	
	** /stderr **
	I0817 00:48:47.997774   50260 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0817 00:48:48.484400   50260 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000158400] misses:0}
	I0817 00:48:48.484975   50260 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0817 00:48:48.484975   50260 network_create.go:106] attempt to create docker network kindnet-20210817002204-111344 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0817 00:48:48.491387   50260 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20210817002204-111344
	W0817 00:48:49.037332   50260 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20210817002204-111344 returned with exit code 1
	W0817 00:48:49.037592   50260 network_create.go:98] failed to create docker network kindnet-20210817002204-111344 192.168.49.0/24, will retry: subnet is taken
	I0817 00:48:49.042613   50260 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000158400] amended:false}} dirty:map[] misses:0}
	I0817 00:48:49.042613   50260 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0817 00:48:49.056232   50260 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000158400] amended:true}} dirty:map[192.168.49.0:0xc000158400 192.168.58.0:0xc000592178] misses:0}
	I0817 00:48:49.056232   50260 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0817 00:48:49.056232   50260 network_create.go:106] attempt to create docker network kindnet-20210817002204-111344 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0817 00:48:49.061147   50260 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20210817002204-111344
	W0817 00:48:49.573617   50260 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20210817002204-111344 returned with exit code 1
	W0817 00:48:49.573617   50260 network_create.go:98] failed to create docker network kindnet-20210817002204-111344 192.168.58.0/24, will retry: subnet is taken
	I0817 00:48:49.585122   50260 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000158400] amended:true}} dirty:map[192.168.49.0:0xc000158400 192.168.58.0:0xc000592178] misses:1}
	I0817 00:48:49.585122   50260 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0817 00:48:49.594167   50260 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000158400] amended:true}} dirty:map[192.168.49.0:0xc000158400 192.168.58.0:0xc000592178 192.168.67.0:0xc000592200] misses:1}
	I0817 00:48:49.594167   50260 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0817 00:48:49.594167   50260 network_create.go:106] attempt to create docker network kindnet-20210817002204-111344 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0817 00:48:49.599174   50260 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20210817002204-111344
	W0817 00:48:50.156845   50260 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20210817002204-111344 returned with exit code 1
	W0817 00:48:50.156961   50260 network_create.go:98] failed to create docker network kindnet-20210817002204-111344 192.168.67.0/24, will retry: subnet is taken
	I0817 00:48:50.172619   50260 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000158400] amended:true}} dirty:map[192.168.49.0:0xc000158400 192.168.58.0:0xc000592178 192.168.67.0:0xc000592200] misses:2}
	I0817 00:48:50.173636   50260 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0817 00:48:50.182658   50260 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000158400] amended:true}} dirty:map[192.168.49.0:0xc000158400 192.168.58.0:0xc000592178 192.168.67.0:0xc000592200 192.168.76.0:0xc000592278] misses:2}
	I0817 00:48:50.182658   50260 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0817 00:48:50.182809   50260 network_create.go:106] attempt to create docker network kindnet-20210817002204-111344 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0817 00:48:50.193289   50260 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20210817002204-111344
	I0817 00:48:50.889712   50260 network_create.go:90] docker network kindnet-20210817002204-111344 192.168.76.0/24 created
	I0817 00:48:50.889712   50260 kic.go:106] calculated static IP "192.168.76.2" for the "kindnet-20210817002204-111344" container
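
The three "subnet is taken" retries above show the pattern: a candidate /24 is reserved (192.168.49.0, then 58.0, 67.0, 76.0 — the third octet steps by 9), `docker network create` is attempted, and on rejection the next candidate is tried. A minimal Go sketch of that probe-and-retry loop, shelling out to the docker CLI — illustrative only, not minikube's network_create.go:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// createFreeNetwork tries successive 192.168.x.0/24 subnets (stepping the
	// third octet by 9, as the log above shows) until `docker network create`
	// succeeds, and returns the subnet that worked.
	func createFreeNetwork(name string) (string, error) {
		for octet := 49; octet <= 247; octet += 9 {
			subnet := fmt.Sprintf("192.168.%d.0/24", octet)
			gateway := fmt.Sprintf("192.168.%d.1", octet)
			cmd := exec.Command("docker", "network", "create",
				"--driver=bridge",
				"--subnet="+subnet,
				"--gateway="+gateway,
				"-o", "--ip-masq", "-o", "--icc",
				"--label=created_by.minikube.sigs.k8s.io=true",
				name)
			if out, err := cmd.CombinedOutput(); err != nil {
				// "subnet is taken" and similar errors: try the next candidate.
				fmt.Printf("subnet %s unavailable: %s\n", subnet, out)
				continue
			}
			return subnet, nil
		}
		return "", fmt.Errorf("no free private subnet found for network %q", name)
	}

	func main() {
		subnet, err := createFreeNetwork("example-network")
		if err != nil {
			panic(err)
		}
		fmt.Println("created network on", subnet)
	}
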
	I0817 00:48:50.908946   50260 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0817 00:48:51.464539   50260 cli_runner.go:115] Run: docker volume create kindnet-20210817002204-111344 --label name.minikube.sigs.k8s.io=kindnet-20210817002204-111344 --label created_by.minikube.sigs.k8s.io=true
	I0817 00:48:51.950723   50260 oci.go:102] Successfully created a docker volume kindnet-20210817002204-111344
	I0817 00:48:51.958800   50260 cli_runner.go:115] Run: docker run --rm --name kindnet-20210817002204-111344-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-20210817002204-111344 --entrypoint /usr/bin/test -v kindnet-20210817002204-111344:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib
	I0817 00:48:54.674620   50260 cli_runner.go:168] Completed: docker run --rm --name kindnet-20210817002204-111344-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-20210817002204-111344 --entrypoint /usr/bin/test -v kindnet-20210817002204-111344:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib: (2.7153712s)
	I0817 00:48:54.674967   50260 oci.go:106] Successfully prepared a docker volume kindnet-20210817002204-111344
	I0817 00:48:54.675183   50260 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
	I0817 00:48:54.675307   50260 kic.go:179] Starting extracting preloaded images to volume ...
	I0817 00:48:54.683917   50260 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 00:48:54.689672   50260 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-20210817002204-111344:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir
	W0817 00:48:55.235620   50260 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-20210817002204-111344:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I0817 00:48:55.235620   50260 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-20210817002204-111344:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: System.Exception (binary-serialized in the response body; readable fields recovered below): The notification platform is unavailable.
	
	����   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)
	   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__6.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.WPF\PromptShareDirectory.cs:line 53
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__8.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.ApiServices\Mounting\FileSharing.cs:line 95
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__6.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.ApiServices\Mounting\FileSharing.cs:line 55
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.HttpApi\Controllers\FilesharingController.cs:line 21
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()
	ExceptionMethod: Windows.UI.Notifications.ToastNotifier CreateToastNotifier(System.String)
	  (Windows.UI, Version=255.255.255.255, Culture=neutral, PublicKeyToken=null, ContentType=WindowsRuntime)
	RestrictedDescription: The notification platform is unavailable.
	See 'docker run --help'.
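
The exit code 125 above is not a tar failure: Docker Desktop on Windows must share the host directory backing the `-v ...preloaded.tar:ro` bind mount, the share prompt could not be displayed ("The notification platform is unavailable"), and the daemon surfaced that as a 500. The extract-into-volume path is only an optimization; when it fails, the run falls back to copying the tarball over SSH and untarring in place, which is exactly what happens at 00:49:24 below. A hedged sketch of that fast-path/fallback shape — function names and paths are illustrative:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// extractPreloadViaVolume is the fast path: bind-mount the tarball from the
	// host and untar it straight into the docker volume. On Docker Desktop for
	// Windows this can fail with exit code 125 when the file-sharing prompt
	// cannot be shown.
	func extractPreloadViaVolume(tarball, volume, image string) error {
		cmd := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", tarball+":/preloaded.tar:ro",
			"-v", volume+":/extractDir",
			image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("volume extraction failed: %v\n%s", err, out)
		}
		return nil
	}

	// copyPreloadOverSSH stands in for the slow fallback actually taken in
	// this run: scp the tarball into the node and untar it there.
	func copyPreloadOverSSH(tarball string) error {
		fmt.Println("falling back to scp + tar -I lz4 -C /var -xf", tarball)
		return nil // real code would scp the file and run tar via the ssh runner
	}

	func main() {
		tarball := `C:\path\to\preloaded-images.tar.lz4` // illustrative path
		if err := extractPreloadViaVolume(tarball, "some-volume", "some/kicbase:tag"); err != nil {
			fmt.Println(err)
			_ = copyPreloadOverSSH(tarball)
		}
	}
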
	I0817 00:48:55.516162   50260 info.go:263] docker info: {ID:4XCY:3GZD:KK67:IPM7:RRQF:WWZF:OGQ6:X6HQ:572M:7N57:P63G:EAE5 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:61 SystemTime:2021-08-17 00:48:55.1415975 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://inde
x.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[]
ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0817 00:48:55.526324   50260 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0817 00:48:56.337292   50260 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-20210817002204-111344 --name kindnet-20210817002204-111344 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-20210817002204-111344 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-20210817002204-111344 --network kindnet-20210817002204-111344 --ip 192.168.76.2 --volume kindnet-20210817002204-111344:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6
	I0817 00:48:59.448057   50260 cli_runner.go:168] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-20210817002204-111344 --name kindnet-20210817002204-111344 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-20210817002204-111344 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-20210817002204-111344 --network kindnet-20210817002204-111344 --ip 192.168.76.2 --volume kindnet-20210817002204-111344:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6: (3.1106471s)
	I0817 00:48:59.450715   50260 cli_runner.go:115] Run: docker container inspect kindnet-20210817002204-111344 --format={{.State.Running}}
	I0817 00:48:59.987766   50260 cli_runner.go:115] Run: docker container inspect kindnet-20210817002204-111344 --format={{.State.Status}}
	I0817 00:49:00.538723   50260 cli_runner.go:115] Run: docker exec kindnet-20210817002204-111344 stat /var/lib/dpkg/alternatives/iptables
	I0817 00:49:01.408163   50260 oci.go:278] the created container "kindnet-20210817002204-111344" has a running status.
	I0817 00:49:01.408434   50260 kic.go:210] Creating ssh key for kic: C:\Users\jenkins\minikube-integration\.minikube\machines\kindnet-20210817002204-111344\id_rsa...
	I0817 00:49:02.073218   50260 kic_runner.go:188] docker (temp): C:\Users\jenkins\minikube-integration\.minikube\machines\kindnet-20210817002204-111344\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0817 00:49:03.052288   50260 cli_runner.go:115] Run: docker container inspect kindnet-20210817002204-111344 --format={{.State.Status}}
	I0817 00:49:03.600810   50260 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0817 00:49:03.601500   50260 kic_runner.go:115] Args: [docker exec --privileged kindnet-20210817002204-111344 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0817 00:49:04.449909   50260 kic.go:250] ensuring only current user has permissions to key file located at : C:\Users\jenkins\minikube-integration\.minikube\machines\kindnet-20210817002204-111344\id_rsa...
	I0817 00:49:05.077369   50260 cli_runner.go:115] Run: docker container inspect kindnet-20210817002204-111344 --format={{.State.Status}}
	I0817 00:49:05.593821   50260 machine.go:88] provisioning docker machine ...
	I0817 00:49:05.593821   50260 ubuntu.go:169] provisioning hostname "kindnet-20210817002204-111344"
	I0817 00:49:05.599091   50260 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210817002204-111344
	I0817 00:49:06.125633   50260 main.go:130] libmachine: Using SSH client type: native
	I0817 00:49:06.135424   50260 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x4c95a0] 0x4c9560 <nil>  [] 0s} 127.0.0.1 55248 <nil> <nil>}
	I0817 00:49:06.135424   50260 main.go:130] libmachine: About to run SSH command:
	sudo hostname kindnet-20210817002204-111344 && echo "kindnet-20210817002204-111344" | sudo tee /etc/hostname
	I0817 00:49:06.545399   50260 main.go:130] libmachine: SSH cmd err, output: <nil>: kindnet-20210817002204-111344
	
	I0817 00:49:06.551470   50260 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210817002204-111344
	I0817 00:49:07.081139   50260 main.go:130] libmachine: Using SSH client type: native
	I0817 00:49:07.081849   50260 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x4c95a0] 0x4c9560 <nil>  [] 0s} 127.0.0.1 55248 <nil> <nil>}
	I0817 00:49:07.081849   50260 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-20210817002204-111344' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-20210817002204-111344/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-20210817002204-111344' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 00:49:07.436913   50260 main.go:130] libmachine: SSH cmd err, output: <nil>: 
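
Each "About to run SSH command:" block above is a single command executed over a native Go SSH session against the container's forwarded SSH port (127.0.0.1:55248 here), authenticated with the generated id_rsa. A minimal sketch of such a runner using golang.org/x/crypto/ssh — the address and username come from this log, the helper name is invented:

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	// runSSH opens one session on the forwarded port and runs a single
	// command, the way each "About to run SSH command:" line above does.
	func runSSH(addr, user, keyPath, command string) (string, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return "", err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return "", err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a 127.0.0.1 test rig
		}
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			return "", err
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			return "", err
		}
		defer sess.Close()
		out, err := sess.CombinedOutput(command)
		return string(out), err
	}

	func main() {
		out, err := runSSH("127.0.0.1:55248", "docker",
			`C:\path\to\id_rsa`, // illustrative key path
			"hostname")
		fmt.Println(out, err)
	}
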
	I0817 00:49:07.437071   50260 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins\minikube-integration\.minikube CaCertPath:C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins\minikube-integration\.minikube}
	I0817 00:49:07.437167   50260 ubuntu.go:177] setting up certificates
	I0817 00:49:07.437167   50260 provision.go:83] configureAuth start
	I0817 00:49:07.451964   50260 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-20210817002204-111344
	I0817 00:49:07.987645   50260 provision.go:138] copyHostCerts
	I0817 00:49:07.988148   50260 exec_runner.go:145] found C:\Users\jenkins\minikube-integration\.minikube/ca.pem, removing ...
	I0817 00:49:07.988148   50260 exec_runner.go:190] rm: C:\Users\jenkins\minikube-integration\.minikube\ca.pem
	I0817 00:49:07.988638   50260 exec_runner.go:152] cp: C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0817 00:49:07.990111   50260 exec_runner.go:145] found C:\Users\jenkins\minikube-integration\.minikube/cert.pem, removing ...
	I0817 00:49:07.990111   50260 exec_runner.go:190] rm: C:\Users\jenkins\minikube-integration\.minikube\cert.pem
	I0817 00:49:07.990497   50260 exec_runner.go:152] cp: C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0817 00:49:07.991876   50260 exec_runner.go:145] found C:\Users\jenkins\minikube-integration\.minikube/key.pem, removing ...
	I0817 00:49:07.991876   50260 exec_runner.go:190] rm: C:\Users\jenkins\minikube-integration\.minikube\key.pem
	I0817 00:49:07.992626   50260 exec_runner.go:152] cp: C:\Users\jenkins\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins\minikube-integration\.minikube/key.pem (1679 bytes)
	I0817 00:49:07.993801   50260 provision.go:112] generating server cert: C:\Users\jenkins\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.kindnet-20210817002204-111344 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube kindnet-20210817002204-111344]
	I0817 00:49:08.153138   50260 provision.go:172] copyRemoteCerts
	I0817 00:49:08.161010   50260 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 00:49:08.165898   50260 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210817002204-111344
	I0817 00:49:08.654621   50260 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55248 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\kindnet-20210817002204-111344\id_rsa Username:docker}
	I0817 00:49:08.924996   50260 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0817 00:49:09.101200   50260 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1261 bytes)
	I0817 00:49:09.223539   50260 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0817 00:49:09.400941   50260 provision.go:86] duration metric: configureAuth took 1.9636997s
	I0817 00:49:09.400941   50260 ubuntu.go:193] setting minikube options for container-runtime
	I0817 00:49:09.401700   50260 config.go:177] Loaded profile config "kindnet-20210817002204-111344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.21.3
	I0817 00:49:09.409052   50260 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210817002204-111344
	I0817 00:49:09.917486   50260 main.go:130] libmachine: Using SSH client type: native
	I0817 00:49:09.918048   50260 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x4c95a0] 0x4c9560 <nil>  [] 0s} 127.0.0.1 55248 <nil> <nil>}
	I0817 00:49:09.918048   50260 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0817 00:49:10.357477   50260 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0817 00:49:10.357477   50260 ubuntu.go:71] root file system type: overlay
	I0817 00:49:10.357477   50260 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0817 00:49:10.367676   50260 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210817002204-111344
	I0817 00:49:10.879453   50260 main.go:130] libmachine: Using SSH client type: native
	I0817 00:49:10.880000   50260 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x4c95a0] 0x4c9560 <nil>  [] 0s} 127.0.0.1 55248 <nil> <nil>}
	I0817 00:49:10.880165   50260 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0817 00:49:11.333419   50260 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0817 00:49:11.345637   50260 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210817002204-111344
	I0817 00:49:11.859113   50260 main.go:130] libmachine: Using SSH client type: native
	I0817 00:49:11.859736   50260 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x4c95a0] 0x4c9560 <nil>  [] 0s} 127.0.0.1 55248 <nil> <nil>}
	I0817 00:49:11.859971   50260 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0817 00:49:16.450351   50260 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-07-30 19:52:33.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2021-08-17 00:49:11.312939000 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	+BindsTo=containerd.service
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
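
The unit update above is deliberately idempotent: the rendered file is written to docker.service.new, `diff -u` compares it against the live unit, and only when they differ is it moved into place followed by daemon-reload, enable, and restart. The diff printed above is that comparison firing. A small sketch of the same idiom — illustrative, run as root:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// syncUnit installs the freshly rendered unit only when it differs from
	// what is already on disk, then reloads and restarts the service.
	func syncUnit(current, candidate string) error {
		// diff exits 0 when the files match; then there is nothing to do.
		if err := exec.Command("diff", "-u", current, candidate).Run(); err == nil {
			return nil
		}
		steps := [][]string{
			{"mv", candidate, current},
			{"systemctl", "-f", "daemon-reload"},
			{"systemctl", "-f", "enable", "docker"},
			{"systemctl", "-f", "restart", "docker"},
		}
		for _, s := range steps {
			if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
				return fmt.Errorf("%v: %v\n%s", s, err, out)
			}
		}
		return nil
	}

	func main() {
		fmt.Println(syncUnit("/lib/systemd/system/docker.service",
			"/lib/systemd/system/docker.service.new"))
	}
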
	
	I0817 00:49:16.450351   50260 machine.go:91] provisioned docker machine in 10.8561159s
	I0817 00:49:16.450586   50260 client.go:171] LocalClient.Create took 29.5253543s
	I0817 00:49:16.450586   50260 start.go:168] duration metric: libmachine.API.Create for "kindnet-20210817002204-111344" took 29.5255992s
	I0817 00:49:16.450586   50260 start.go:267] post-start starting for "kindnet-20210817002204-111344" (driver="docker")
	I0817 00:49:16.450586   50260 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 00:49:16.459715   50260 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 00:49:16.466994   50260 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210817002204-111344
	I0817 00:49:16.982404   50260 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55248 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\kindnet-20210817002204-111344\id_rsa Username:docker}
	I0817 00:49:17.212748   50260 ssh_runner.go:149] Run: cat /etc/os-release
	I0817 00:49:17.231110   50260 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0817 00:49:17.231265   50260 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0817 00:49:17.231265   50260 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0817 00:49:17.231653   50260 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0817 00:49:17.232142   50260 filesync.go:126] Scanning C:\Users\jenkins\minikube-integration\.minikube\addons for local assets ...
	I0817 00:49:17.232929   50260 filesync.go:126] Scanning C:\Users\jenkins\minikube-integration\.minikube\files for local assets ...
	I0817 00:49:17.235084   50260 filesync.go:149] local asset: C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\1113442.pem -> 1113442.pem in /etc/ssl/certs
	I0817 00:49:17.247197   50260 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0817 00:49:17.296064   50260 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\1113442.pem --> /etc/ssl/certs/1113442.pem (1708 bytes)
	I0817 00:49:17.505564   50260 start.go:270] post-start completed in 1.0549375s
	I0817 00:49:17.518959   50260 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-20210817002204-111344
	I0817 00:49:18.018227   50260 profile.go:148] Saving config to C:\Users\jenkins\minikube-integration\.minikube\profiles\kindnet-20210817002204-111344\config.json ...
	I0817 00:49:18.030603   50260 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0817 00:49:18.036192   50260 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210817002204-111344
	I0817 00:49:18.542209   50260 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55248 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\kindnet-20210817002204-111344\id_rsa Username:docker}
	I0817 00:49:18.748421   50260 start.go:129] duration metric: createHost completed in 31.8265516s
	I0817 00:49:18.749647   50260 start.go:80] releasing machines lock for "kindnet-20210817002204-111344", held for 31.8280652s
	I0817 00:49:18.754620   50260 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-20210817002204-111344
	I0817 00:49:19.231515   50260 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0817 00:49:19.235118   50260 ssh_runner.go:149] Run: systemctl --version
	I0817 00:49:19.245173   50260 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210817002204-111344
	I0817 00:49:19.245411   50260 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210817002204-111344
	I0817 00:49:19.741463   50260 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55248 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\kindnet-20210817002204-111344\id_rsa Username:docker}
	I0817 00:49:19.784179   50260 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55248 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\kindnet-20210817002204-111344\id_rsa Username:docker}
	I0817 00:49:20.013697   50260 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0817 00:49:20.240111   50260 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0817 00:49:20.307359   50260 cruntime.go:249] skipping containerd shutdown because we are bound to it
	I0817 00:49:20.320756   50260 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0817 00:49:20.390726   50260 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 00:49:20.483330   50260 ssh_runner.go:149] Run: sudo systemctl unmask docker.service
	I0817 00:49:20.898163   50260 ssh_runner.go:149] Run: sudo systemctl enable docker.socket
	I0817 00:49:21.451915   50260 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0817 00:49:21.539040   50260 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0817 00:49:22.022253   50260 ssh_runner.go:149] Run: sudo systemctl start docker
	I0817 00:49:22.106513   50260 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0817 00:49:22.525554   50260 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0817 00:49:22.843731   50260 out.go:204] * Preparing Kubernetes v1.21.3 on Docker 20.10.8 ...
	I0817 00:49:22.854759   50260 cli_runner.go:115] Run: docker exec -t kindnet-20210817002204-111344 dig +short host.docker.internal
	I0817 00:49:23.715406   50260 network.go:69] got host ip for mount in container by digging dns: 192.168.65.2
	I0817 00:49:23.726458   50260 ssh_runner.go:149] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0817 00:49:23.774651   50260 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
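
The bash pipeline above is an idempotent hosts-file edit: filter out any existing `host.minikube.internal` entry, append the fresh `192.168.65.2` mapping, and copy the temp file back over /etc/hosts. The same idea in Go — a sketch; the function name is invented:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// pinHost rewrites an /etc/hosts-style file so that exactly one line maps
	// name to ip, mirroring the grep -v / echo / cp pipeline in the log.
	func pinHost(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if strings.HasSuffix(line, "\t"+name) {
				continue // drop any stale mapping for this name
			}
			if line != "" {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		fmt.Println(pinHost("hosts.test", "192.168.65.2", "host.minikube.internal"))
	}
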
	I0817 00:49:23.875352   50260 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kindnet-20210817002204-111344
	I0817 00:49:24.381336   50260 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
	I0817 00:49:24.390435   50260 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0817 00:49:24.665944   50260 docker.go:535] Got preloaded images: 
	I0817 00:49:24.666769   50260 docker.go:541] k8s.gcr.io/kube-apiserver:v1.21.3 wasn't preloaded
	I0817 00:49:24.673784   50260 ssh_runner.go:149] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0817 00:49:24.737268   50260 ssh_runner.go:149] Run: which lz4
	I0817 00:49:24.772859   50260 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0817 00:49:24.816544   50260 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0817 00:49:24.816864   50260 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (504826016 bytes)
	I0817 00:50:03.210802   50260 docker.go:500] Took 38.443937 seconds to copy over tarball
	I0817 00:50:03.219024   50260 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0817 00:50:16.781243   50260 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (13.5615736s)
	I0817 00:50:16.781465   50260 ssh_runner.go:100] rm: /preloaded.tar.lz4
	I0817 00:50:17.609515   50260 ssh_runner.go:149] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0817 00:50:17.659245   50260 ssh_runner.go:316] scp memory --> /var/lib/docker/image/overlay2/repositories.json (3152 bytes)
	I0817 00:50:17.780903   50260 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0817 00:50:18.108527   50260 ssh_runner.go:149] Run: sudo systemctl restart docker
	I0817 00:50:19.405211   50260 ssh_runner.go:189] Completed: sudo systemctl restart docker: (1.2966345s)
	I0817 00:50:19.416348   50260 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0817 00:50:19.648402   50260 docker.go:535] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.21.3
	k8s.gcr.io/kube-scheduler:v1.21.3
	k8s.gcr.io/kube-controller-manager:v1.21.3
	k8s.gcr.io/kube-proxy:v1.21.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.4.1
	kubernetesui/dashboard:v2.1.0
	k8s.gcr.io/coredns/coredns:v1.8.0
	k8s.gcr.io/etcd:3.4.13-0
	kubernetesui/metrics-scraper:v1.0.4
	
	-- /stdout --
	I0817 00:50:19.648751   50260 cache_images.go:74] Images are preloaded, skipping loading
	I0817 00:50:19.656650   50260 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
	I0817 00:50:20.105516   50260 cni.go:93] Creating CNI manager for "kindnet"
	I0817 00:50:20.105711   50260 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0817 00:50:20.105911   50260 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-20210817002204-111344 NodeName:kindnet-20210817002204-111344 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/
lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0817 00:50:20.106788   50260 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "kindnet-20210817002204-111344"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0817 00:50:20.107366   50260 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kindnet-20210817002204-111344 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:kindnet-20210817002204-111344 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:}
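
The kubeadm YAML and kubelet unit above are rendered from the option struct logged at kubeadm.go:153. A trimmed-down sketch of that render step with text/template, covering only a few of the fields — not minikube's actual template:

	package main

	import (
		"os"
		"text/template"
	)

	// kubeadmOpts carries a subset of the parameters logged at kubeadm.go:153.
	type kubeadmOpts struct {
		AdvertiseAddress string
		APIServerPort    int
		NodeName         string
		PodSubnet        string
		ServiceCIDR      string
		K8sVersion       string
	}

	const tmpl = `apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.APIServerPort}}
	nodeRegistration:
	  name: "{{.NodeName}}"
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	kubernetesVersion: {{.K8sVersion}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceCIDR}}
	`

	func main() {
		opts := kubeadmOpts{
			AdvertiseAddress: "192.168.76.2",
			APIServerPort:    8443,
			NodeName:         "kindnet-20210817002204-111344",
			PodSubnet:        "10.244.0.0/16",
			ServiceCIDR:      "10.96.0.0/12",
			K8sVersion:       "v1.21.3",
		}
		template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, opts)
	}
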
	I0817 00:50:20.119180   50260 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0817 00:50:20.148759   50260 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 00:50:20.156603   50260 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 00:50:20.186350   50260 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (407 bytes)
	I0817 00:50:20.263603   50260 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0817 00:50:20.316866   50260 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2072 bytes)
	I0817 00:50:20.396609   50260 ssh_runner.go:149] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0817 00:50:20.419139   50260 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 00:50:20.467373   50260 certs.go:52] Setting up C:\Users\jenkins\minikube-integration\.minikube\profiles\kindnet-20210817002204-111344 for IP: 192.168.76.2
	I0817 00:50:20.467959   50260 certs.go:179] skipping minikubeCA CA generation: C:\Users\jenkins\minikube-integration\.minikube\ca.key
	I0817 00:50:20.468363   50260 certs.go:179] skipping proxyClientCA CA generation: C:\Users\jenkins\minikube-integration\.minikube\proxy-client-ca.key
	I0817 00:50:20.469029   50260 certs.go:297] generating minikube-user signed cert: C:\Users\jenkins\minikube-integration\.minikube\profiles\kindnet-20210817002204-111344\client.key
	I0817 00:50:20.469029   50260 crypto.go:69] Generating cert C:\Users\jenkins\minikube-integration\.minikube\profiles\kindnet-20210817002204-111344\client.crt with IP's: []
	I0817 00:50:20.585430   50260 crypto.go:157] Writing cert to C:\Users\jenkins\minikube-integration\.minikube\profiles\kindnet-20210817002204-111344\client.crt ...
	I0817 00:50:20.585430   50260 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\.minikube\profiles\kindnet-20210817002204-111344\client.crt: {Name:mkdc53b837484a42bfdd433175019f191911a3cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 00:50:20.588295   50260 crypto.go:165] Writing key to C:\Users\jenkins\minikube-integration\.minikube\profiles\kindnet-20210817002204-111344\client.key ...
	I0817 00:50:20.588295   50260 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\.minikube\profiles\kindnet-20210817002204-111344\client.key: {Name:mk6ee6b18328cc3922c749bb5e1ceb3df4893b6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 00:50:20.590413   50260 certs.go:297] generating minikube signed cert: C:\Users\jenkins\minikube-integration\.minikube\profiles\kindnet-20210817002204-111344\apiserver.key.31bdca25
	I0817 00:50:20.591399   50260 crypto.go:69] Generating cert C:\Users\jenkins\minikube-integration\.minikube\profiles\kindnet-20210817002204-111344\apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0817 00:50:21.070400   50260 crypto.go:157] Writing cert to C:\Users\jenkins\minikube-integration\.minikube\profiles\kindnet-20210817002204-111344\apiserver.crt.31bdca25 ...
	I0817 00:50:21.070400   50260 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\.minikube\profiles\kindnet-20210817002204-111344\apiserver.crt.31bdca25: {Name:mkb3be30f1db088d1ae39d1f7b1d19df6b07314b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 00:50:21.073403   50260 crypto.go:165] Writing key to C:\Users\jenkins\minikube-integration\.minikube\profiles\kindnet-20210817002204-111344\apiserver.key.31bdca25 ...
	I0817 00:50:21.073403   50260 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\.minikube\profiles\kindnet-20210817002204-111344\apiserver.key.31bdca25: {Name:mk9eae9d3d39ce96f86de6a7593972dec0b2c71c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 00:50:21.074410   50260 certs.go:308] copying C:\Users\jenkins\minikube-integration\.minikube\profiles\kindnet-20210817002204-111344\apiserver.crt.31bdca25 -> C:\Users\jenkins\minikube-integration\.minikube\profiles\kindnet-20210817002204-111344\apiserver.crt
	I0817 00:50:21.081425   50260 certs.go:312] copying C:\Users\jenkins\minikube-integration\.minikube\profiles\kindnet-20210817002204-111344\apiserver.key.31bdca25 -> C:\Users\jenkins\minikube-integration\.minikube\profiles\kindnet-20210817002204-111344\apiserver.key
	I0817 00:50:21.091438   50260 certs.go:297] generating aggregator signed cert: C:\Users\jenkins\minikube-integration\.minikube\profiles\kindnet-20210817002204-111344\proxy-client.key
	I0817 00:50:21.091438   50260 crypto.go:69] Generating cert C:\Users\jenkins\minikube-integration\.minikube\profiles\kindnet-20210817002204-111344\proxy-client.crt with IP's: []
	I0817 00:50:21.188000   50260 crypto.go:157] Writing cert to C:\Users\jenkins\minikube-integration\.minikube\profiles\kindnet-20210817002204-111344\proxy-client.crt ...
	I0817 00:50:21.188000   50260 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\.minikube\profiles\kindnet-20210817002204-111344\proxy-client.crt: {Name:mk9895e06892bf98a8e45b694a251bcbf604936f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 00:50:21.190597   50260 crypto.go:165] Writing key to C:\Users\jenkins\minikube-integration\.minikube\profiles\kindnet-20210817002204-111344\proxy-client.key ...
	I0817 00:50:21.190597   50260 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\.minikube\profiles\kindnet-20210817002204-111344\proxy-client.key: {Name:mk6e021344ff37d3c7b0d65fd0b98f2f0da63b16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
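
certs.go above reuses the profile's CA keys and mints three leaf certificates: a client cert, an apiserver cert whose SANs match the san=[...] list logged earlier, and an aggregator proxy-client cert. A compact crypto/x509 sketch of signing a server cert with those IP and DNS SANs — a throwaway CA is self-signed here purely so the example runs standalone:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	// signServerCert creates a CA-signed cert with the IP SANs the log shows
	// for apiserver.crt: the node IP, the service VIP, and loopback.
	func signServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("192.168.76.2"), net.ParseIP("10.96.0.1"),
				net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			},
			DNSNames: []string{"localhost", "minikube", "kindnet-20210817002204-111344"},
		}
		der, err := x509.CreateCertificate(rand.Reader, tpl, caCert, &key.PublicKey, caKey)
		if err != nil {
			return nil, nil, err
		}
		return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), key, nil
	}

	func main() {
		// Self-signed CA for the sketch; the real flow reuses the profile's ca.key.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTpl, caTpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)
		pemBytes, _, err := signServerCert(caCert, caKey)
		if err != nil {
			panic(err)
		}
		os.Stdout.Write(pemBytes)
	}
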
	I0817 00:50:21.203248   50260 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\111344.pem (1338 bytes)
	W0817 00:50:21.203844   50260 certs.go:372] ignoring C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\111344_empty.pem, impossibly tiny 0 bytes
	I0817 00:50:21.203974   50260 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0817 00:50:21.204483   50260 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0817 00:50:21.204907   50260 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0817 00:50:21.205020   50260 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0817 00:50:21.205888   50260 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\1113442.pem (1708 bytes)
	I0817 00:50:21.210357   50260 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\kindnet-20210817002204-111344\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 00:50:21.349662   50260 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\kindnet-20210817002204-111344\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0817 00:50:21.482413   50260 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\kindnet-20210817002204-111344\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 00:50:21.605532   50260 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\kindnet-20210817002204-111344\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0817 00:50:21.715278   50260 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 00:50:21.826853   50260 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0817 00:50:21.927077   50260 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 00:50:22.019587   50260 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0817 00:50:22.147714   50260 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 00:50:22.236220   50260 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\certs\111344.pem --> /usr/share/ca-certificates/111344.pem (1338 bytes)
	I0817 00:50:22.313254   50260 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\1113442.pem --> /usr/share/ca-certificates/1113442.pem (1708 bytes)
	I0817 00:50:22.393045   50260 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
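
The `scp memory --> /var/lib/minikube/kubeconfig (738 bytes)` entry above is not a file copy: the runner streams an in-memory buffer straight to the remote path over the SSH session. A minimal Go sketch of that pattern, assuming a stock ssh client on PATH and reusing the key path and forwarded port that appear later in this log; the helper name and placeholder content are illustrative, not minikube's actual ssh_runner:

    package main

    import (
        "bytes"
        "log"
        "os/exec"
    )

    // pushBytes streams an in-memory buffer to remotePath by piping it
    // into `sudo tee` over ssh, mirroring "scp memory --> <path>".
    func pushBytes(content []byte, keyPath, port, remotePath string) error {
        cmd := exec.Command("ssh",
            "-i", keyPath, "-p", port,
            "-o", "StrictHostKeyChecking=no",
            "docker@127.0.0.1",
            "sudo tee "+remotePath+" >/dev/null")
        cmd.Stdin = bytes.NewReader(content)
        return cmd.Run()
    }

    func main() {
        kubeconfig := []byte("apiVersion: v1\nkind: Config\n") // placeholder content
        err := pushBytes(kubeconfig,
            `C:\Users\jenkins\minikube-integration\.minikube\machines\kindnet-20210817002204-111344\id_rsa`,
            "55248", "/var/lib/minikube/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
    }
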
	I0817 00:50:22.485617   50260 ssh_runner.go:149] Run: openssl version
	I0817 00:50:22.516729   50260 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111344.pem && ln -fs /usr/share/ca-certificates/111344.pem /etc/ssl/certs/111344.pem"
	I0817 00:50:22.550543   50260 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/111344.pem
	I0817 00:50:22.574717   50260 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 16 23:23 /usr/share/ca-certificates/111344.pem
	I0817 00:50:22.581379   50260 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111344.pem
	I0817 00:50:22.615384   50260 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/111344.pem /etc/ssl/certs/51391683.0"
	I0817 00:50:22.658830   50260 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1113442.pem && ln -fs /usr/share/ca-certificates/1113442.pem /etc/ssl/certs/1113442.pem"
	I0817 00:50:22.704930   50260 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1113442.pem
	I0817 00:50:22.723412   50260 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 16 23:23 /usr/share/ca-certificates/1113442.pem
	I0817 00:50:22.732529   50260 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1113442.pem
	I0817 00:50:22.772802   50260 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1113442.pem /etc/ssl/certs/3ec20f2e.0"
	I0817 00:50:22.835938   50260 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 00:50:22.881400   50260 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 00:50:22.899129   50260 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 16 23:12 /usr/share/ca-certificates/minikubeCA.pem
	I0817 00:50:22.907132   50260 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 00:50:22.947358   50260 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
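
The openssl/ln pairs above follow the standard OpenSSL hashed-certificate-directory convention: TLS clients look a CA up in /etc/ssl/certs by subject-name hash, so each PEM needs a `<hash>.0` symlink (here 51391683.0, 3ec20f2e.0 and b5213941.0). A sketch of one such step in Go, assuming `openssl` is on PATH; this is a local illustration of the convention, not minikube's certs.go:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // hashLink creates the <subject-hash>.0 symlink that OpenSSL-based
    // tools use to locate a CA certificate in a hashed cert directory.
    func hashLink(certPath, certDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := fmt.Sprintf("%s/%s.0", certDir, hash)
        os.Remove(link) // ln -fs semantics: replace any existing link
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := hashLink("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
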
	I0817 00:50:22.984527   50260 kubeadm.go:390] StartCluster: {Name:kindnet-20210817002204-111344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:kindnet-20210817002204-111344 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 00:50:22.990184   50260 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0817 00:50:23.140253   50260 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 00:50:23.186654   50260 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 00:50:23.230420   50260 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0817 00:50:23.238591   50260 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 00:50:23.270878   50260 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 00:50:23.270878   50260 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0817 00:51:16.701777   50260 out.go:204]   - Generating certificates and keys ...
	I0817 00:51:16.703093   50260 out.go:204]   - Booting up control plane ...
	I0817 00:51:16.719560   50260 out.go:204]   - Configuring RBAC rules ...
	I0817 00:51:16.723069   50260 cni.go:93] Creating CNI manager for "kindnet"
	I0817 00:51:16.725380   50260 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0817 00:51:16.736408   50260 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0817 00:51:16.794520   50260 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0817 00:51:16.794520   50260 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0817 00:51:17.046179   50260 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0817 00:51:23.654936   50260 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (6.6081411s)
	I0817 00:51:23.655068   50260 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0817 00:51:23.668842   50260 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 00:51:23.670853   50260 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48 minikube.k8s.io/name=kindnet-20210817002204-111344 minikube.k8s.io/updated_at=2021_08_17T00_51_23_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 00:51:24.189472   50260 ops.go:34] apiserver oom_adj: -16
	I0817 00:51:25.599920   50260 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig: (1.9310037s)
	I0817 00:51:25.608922   50260 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 00:51:26.829384   50260 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48 minikube.k8s.io/name=kindnet-20210817002204-111344 minikube.k8s.io/updated_at=2021_08_17T00_51_23_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig: (3.1582587s)
	I0817 00:51:27.322628   50260 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (1.7136409s)
	I0817 00:51:27.831526   50260 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 00:51:29.658400   50260 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (1.8268039s)
	I0817 00:51:29.658400   50260 kubeadm.go:985] duration metric: took 6.0029696s to wait for elevateKubeSystemPrivileges.
	I0817 00:51:29.658400   50260 kubeadm.go:392] StartCluster complete in 1m6.6713245s
	I0817 00:51:29.658400   50260 settings.go:142] acquiring lock: {Name:mk81656fcf8bcddd49caaa1adb1c177165a02100 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 00:51:29.658400   50260 settings.go:150] Updating kubeconfig:  C:\Users\jenkins\minikube-integration\kubeconfig
	I0817 00:51:29.674244   50260 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\kubeconfig: {Name:mk312e0248780fd448f3a83862df8ee597f47373 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 00:51:30.280119   50260 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "kindnet-20210817002204-111344" rescaled to 1
	I0817 00:51:30.280119   50260 start.go:226] Will wait 5m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0817 00:51:30.280119   50260 out.go:177] * Verifying Kubernetes components...
	I0817 00:51:30.280119   50260 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0817 00:51:30.280119   50260 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0817 00:51:30.280119   50260 config.go:177] Loaded profile config "kindnet-20210817002204-111344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.21.3
	I0817 00:51:30.280119   50260 addons.go:59] Setting storage-provisioner=true in profile "kindnet-20210817002204-111344"
	I0817 00:51:30.280119   50260 addons.go:59] Setting default-storageclass=true in profile "kindnet-20210817002204-111344"
	I0817 00:51:30.280119   50260 addons.go:135] Setting addon storage-provisioner=true in "kindnet-20210817002204-111344"
	I0817 00:51:30.280119   50260 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-20210817002204-111344"
	W0817 00:51:30.280119   50260 addons.go:147] addon storage-provisioner should already be in state true
	I0817 00:51:30.280119   50260 host.go:66] Checking if "kindnet-20210817002204-111344" exists ...
	I0817 00:51:30.295090   50260 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0817 00:51:30.297091   50260 cli_runner.go:115] Run: docker container inspect kindnet-20210817002204-111344 --format={{.State.Status}}
	I0817 00:51:30.297091   50260 cli_runner.go:115] Run: docker container inspect kindnet-20210817002204-111344 --format={{.State.Status}}
	I0817 00:51:30.826578   50260 addons.go:135] Setting addon default-storageclass=true in "kindnet-20210817002204-111344"
	W0817 00:51:30.826578   50260 addons.go:147] addon default-storageclass should already be in state true
	I0817 00:51:30.826775   50260 host.go:66] Checking if "kindnet-20210817002204-111344" exists ...
	I0817 00:51:30.842007   50260 cli_runner.go:115] Run: docker container inspect kindnet-20210817002204-111344 --format={{.State.Status}}
	I0817 00:51:30.862016   50260 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 00:51:30.862016   50260 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 00:51:30.862016   50260 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0817 00:51:30.877531   50260 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210817002204-111344
	I0817 00:51:31.380773   50260 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55248 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\kindnet-20210817002204-111344\id_rsa Username:docker}
	I0817 00:51:31.392902   50260 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0817 00:51:31.392902   50260 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0817 00:51:31.402716   50260 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210817002204-111344
	I0817 00:51:31.930806   50260 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55248 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\kindnet-20210817002204-111344\id_rsa Username:docker}
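
The two `docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'"` calls above recover the host port Docker published for the container's sshd, which is why both SSH clients dial 127.0.0.1:55248 instead of a container IP. The same lookup as a Go sketch; the Go template string is taken verbatim from the log, the rest is illustrative:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // sshHostPort asks Docker which host port is published for the
    // container's 22/tcp, i.e. the port an SSH client dials on 127.0.0.1.
    func sshHostPort(container string) (string, error) {
        tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        port, err := sshHostPort("kindnet-20210817002204-111344")
        if err != nil {
            panic(err)
        }
        fmt.Println("ssh -p", port, "docker@127.0.0.1")
    }
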
	I0817 00:51:33.343459   50260 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (3.0632228s)
	I0817 00:51:33.343459   50260 ssh_runner.go:189] Completed: sudo systemctl is-active --quiet service kubelet: (3.0482522s)
	I0817 00:51:33.343711   50260 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0817 00:51:33.350331   50260 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kindnet-20210817002204-111344
	I0817 00:51:33.417075   50260 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 00:51:33.495069   50260 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0817 00:51:33.842811   50260 node_ready.go:35] waiting up to 5m0s for node "kindnet-20210817002204-111344" to be "Ready" ...
	I0817 00:51:35.890142   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:51:37.904313   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:51:40.411690   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:51:40.986961   50260 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.6429578s)
	I0817 00:51:40.987113   50260 start.go:728] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
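
The pipeline that just completed (7.6s) is how the `host.minikube.internal` record reaches cluster DNS: the coredns ConfigMap is fetched, sed inserts a `hosts` stanza ahead of the `forward . /etc/resolv.conf` directive, and the result is `kubectl replace`d. A Go sketch of the same string transformation; the stanza text comes from the command above, while the sample Corefile is illustrative:

    package main

    import (
        "fmt"
        "strings"
    )

    const hostsStanza = `        hosts {
           192.168.65.2 host.minikube.internal
           fallthrough
        }
`

    // injectHostRecord inserts the hosts plugin stanza immediately before
    // the "forward . /etc/resolv.conf" directive, as the sed pipeline does.
    func injectHostRecord(corefile string) string {
        const anchor = "        forward . /etc/resolv.conf"
        return strings.Replace(corefile, anchor, hostsStanza+anchor, 1)
    }

    func main() {
        sample := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n}\n"
        fmt.Println(injectHostRecord(sample))
    }
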
	I0817 00:51:41.746542   50260 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.3285038s)
	I0817 00:51:41.746745   50260 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.2513614s)
	I0817 00:51:41.755474   50260 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0817 00:51:41.755474   50260 addons.go:344] enableAddons completed in 11.4749166s
	I0817 00:51:42.899555   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:51:45.413999   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:51:47.891122   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:51:50.391527   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:51:52.884263   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:51:55.388769   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:51:58.096681   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:52:00.380436   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:52:02.384176   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:52:04.881587   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:52:06.889035   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:52:09.380321   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:52:11.384013   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:52:13.883197   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:52:15.887091   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:52:18.417137   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:52:20.890442   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:52:23.382532   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:52:25.888413   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:52:28.382288   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:52:30.383659   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:52:32.389295   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:52:34.883548   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:52:36.884666   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:52:39.385434   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:52:41.393240   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:52:43.399978   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:52:45.883115   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:52:47.886955   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:52:50.380936   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:52:52.386026   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:52:54.391806   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:52:56.405018   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:52:58.419432   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:53:00.886449   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:53:02.891808   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:53:04.897538   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:53:07.382097   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:53:09.595093   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:53:11.897558   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:53:13.928347   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:53:16.387392   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:53:18.393441   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:53:20.888022   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:53:23.386478   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:53:25.882140   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:53:27.885778   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:53:29.999969   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:53:32.380390   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:53:34.385152   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:53:36.886632   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:53:39.384745   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:53:41.897185   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:53:44.383739   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:53:46.385314   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:53:48.387511   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:53:50.400011   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:53:52.888828   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:53:55.384987   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:53:57.884341   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:53:59.886598   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:54:01.886934   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:54:04.395748   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:54:06.897577   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:54:09.386832   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:54:11.387218   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:54:13.387973   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:54:15.887820   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:54:17.899429   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:54:20.386080   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:54:22.397054   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:54:24.896989   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:54:26.907389   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:54:29.388729   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:54:31.912873   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:54:34.386049   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:54:36.386953   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:54:38.886800   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:54:40.887092   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:54:43.391857   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:54:45.886161   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:54:47.965819   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:54:50.385311   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:54:52.391908   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:54:54.888216   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:54:56.888324   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:54:59.384625   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:55:01.441639   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:55:03.888342   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:55:05.893151   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:55:08.385350   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:55:10.387361   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:55:12.387661   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:55:14.888618   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:55:17.398438   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:55:19.887097   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:55:21.889243   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:55:24.390160   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:55:26.887025   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:55:28.887979   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:55:31.393988   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:55:33.893204   50260 node_ready.go:58] node "kindnet-20210817002204-111344" has status "Ready":"False"
	I0817 00:55:33.912761   50260 node_ready.go:38] duration metric: took 4m0.0608065s waiting for node "kindnet-20210817002204-111344" to be "Ready" ...
	I0817 00:55:33.915251   50260 out.go:177] 
	W0817 00:55:33.915971   50260 out.go:242] X Exiting due to GUEST_START: wait 5m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0817 00:55:33.915971   50260 out.go:242] * 
	[warning]: invalid value provided to Color, using default
	W0817 00:55:33.917868   50260 out.go:242] ╭──────────────────────────────────────────────────────────────────────────────╮
	│                                                                              │
	│    * If the above advice does not help, please let us know:                  │
	│      https://github.com/kubernetes/minikube/issues/new/choose                │
	│                                                                              │
	│    * Please attach the following file to the GitHub issue:                   │
	│    * - C:\Users\jenkins\minikube-integration\.minikube\logs\lastStart.txt    │
	│                                                                              │
	╰──────────────────────────────────────────────────────────────────────────────╯
	I0817 00:55:33.923198   50260 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:100: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (411.63s)
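
The four minutes of node_ready lines above are a plain poll loop: fetch the Node object, inspect its Ready condition, sleep, retry until the 5m budget from start.go:226 is spent. A condensed sketch of such a wait using client-go, with the kubeconfig path reused from this log; this mirrors the idea, not minikube's actual node_ready.go:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the node's Ready condition until it is True
    // or the timeout expires, much like the node_ready lines above.
    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
        return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, nil // transient apiserver errors: keep polling
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", `C:\Users\jenkins\minikube-integration\kubeconfig`)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        fmt.Println(waitNodeReady(cs, "kindnet-20210817002204-111344", 5*time.Minute))
    }
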

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (366.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
E0817 00:52:04.978187  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\cilium-20210817002204-111344\client.crt: The system cannot find the path specified.
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20210817002157-111344 exec deployment/netcat -- nslookup kubernetes.default
E0817 00:52:15.222040  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\cilium-20210817002204-111344\client.crt: The system cannot find the path specified.
net_test.go:162: (dbg) Non-zero exit: kubectl --context enable-default-cni-20210817002157-111344 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.818268s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20210817002157-111344 exec deployment/netcat -- nslookup kubernetes.default
E0817 00:52:22.811216  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\no-preload-20210817002237-111344\client.crt: The system cannot find the path specified.
E0817 00:52:35.704635  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\cilium-20210817002204-111344\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:162: (dbg) Non-zero exit: kubectl --context enable-default-cni-20210817002157-111344 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.9123024s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20210817002157-111344 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:162: (dbg) Non-zero exit: kubectl --context enable-default-cni-20210817002157-111344 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.7642505s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20210817002157-111344 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:162: (dbg) Non-zero exit: kubectl --context enable-default-cni-20210817002157-111344 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.8898439s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20210817002157-111344 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:162: (dbg) Non-zero exit: kubectl --context enable-default-cni-20210817002157-111344 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.7460859s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20210817002157-111344 exec deployment/netcat -- nslookup kubernetes.default
E0817 00:53:45.892970  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\no-preload-20210817002237-111344\client.crt: The system cannot find the path specified.
net_test.go:162: (dbg) Non-zero exit: kubectl --context enable-default-cni-20210817002157-111344 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.4793473s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20210817002157-111344 exec deployment/netcat -- nslookup kubernetes.default
E0817 00:54:09.206781  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210816232348-111344\client.crt: The system cannot find the path specified.
net_test.go:162: (dbg) Non-zero exit: kubectl --context enable-default-cni-20210817002157-111344 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.6002224s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20210817002157-111344 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context enable-default-cni-20210817002157-111344 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.7723761s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0817 00:54:38.593246  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\cilium-20210817002204-111344\client.crt: The system cannot find the path specified.
E0817 00:54:56.782465  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\auto-20210817002157-111344\client.crt: The system cannot find the path specified.
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20210817002157-111344 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context enable-default-cni-20210817002157-111344 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5250848s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0817 00:55:18.639708  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\false-20210817002204-111344\client.crt: The system cannot find the path specified.
E0817 00:55:20.399321  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210817000749-111344\client.crt: The system cannot find the path specified.
E0817 00:55:28.764084  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\default-k8s-different-port-20210817002733-111344\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20210817002157-111344 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:162: (dbg) Non-zero exit: kubectl --context enable-default-cni-20210817002157-111344 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.8670546s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20210817002157-111344 exec deployment/netcat -- nslookup kubernetes.default
E0817 00:56:23.534043  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\old-k8s-version-20210817002204-111344\client.crt: The system cannot find the path specified.
net_test.go:162: (dbg) Non-zero exit: kubectl --context enable-default-cni-20210817002157-111344 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5565612s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0817 00:56:51.837197  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\default-k8s-different-port-20210817002733-111344\client.crt: The system cannot find the path specified.
E0817 00:56:54.733911  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\cilium-20210817002204-111344\client.crt: The system cannot find the path specified.
E0817 00:57:01.146845  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344\client.crt: The system cannot find the path specified.
E0817 00:57:22.442217  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\cilium-20210817002204-111344\client.crt: The system cannot find the path specified.
E0817 00:57:22.822427  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\no-preload-20210817002237-111344\client.crt: The system cannot find the path specified.
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20210817002157-111344 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context enable-default-cni-20210817002157-111344 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.833108s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:168: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:173: failed nslookup: got=";; connection timed out; no servers could be reached\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/enable-default-cni/DNS (366.02s)
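
Every attempt above runs the probe encoded at net_test.go:162: exec `nslookup kubernetes.default` inside the netcat deployment and expect the apiserver service IP 10.96.0.1 (the first address of the 10.96.0.0/12 service CIDR) in the output. A standalone Go sketch of that check, assuming kubectl is on PATH; a simplified mirror of the test's logic, not the test itself:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // dnsWorks replicates the test's probe: nslookup inside the netcat
    // pod must resolve kubernetes.default to the apiserver service IP.
    func dnsWorks(kubeContext string) (bool, error) {
        out, err := exec.Command("kubectl", "--context", kubeContext,
            "exec", "deployment/netcat", "--",
            "nslookup", "kubernetes.default").CombinedOutput()
        if err != nil {
            return false, fmt.Errorf("nslookup failed: %v\n%s", err, out)
        }
        return strings.Contains(string(out), "10.96.0.1"), nil
    }

    func main() {
        ok, err := dnsWorks("enable-default-cni-20210817002157-111344")
        fmt.Println(ok, err)
    }
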

                                                
                                    
TestNetworkPlugins/group/bridge/Start (521.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p bridge-20210817002157-111344 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker
E0817 00:53:16.666884  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\cilium-20210817002204-111344\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:98: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p bridge-20210817002157-111344 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker: exit status 1 (8m40.9108666s)

                                                
                                                
-- stdout --
	* [bridge-20210817002157-111344] minikube v1.22.0 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	  - KUBECONFIG=C:\Users\jenkins\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12230
	* Using the docker driver based on user configuration
	* Starting control plane node bridge-20210817002157-111344 in cluster bridge-20210817002157-111344
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "bridge-20210817002157-111344" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.21.3 on Docker 20.10.8 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass

                                                
                                                
-- /stdout --
** stderr ** 
	I0817 00:53:16.943636   53120 out.go:298] Setting OutFile to fd 3628 ...
	I0817 00:53:16.945278   53120 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 00:53:16.945278   53120 out.go:311] Setting ErrFile to fd 4088...
	I0817 00:53:16.945278   53120 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0817 00:53:16.962705   53120 out.go:305] Setting JSON to false
	I0817 00:53:16.975481   53120 start.go:111] hostinfo: {"hostname":"windows-server-2","uptime":8369644,"bootTime":1620791952,"procs":145,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2f8328f4-5428-47c7-ab5a-b32e2504bd6f"}
	W0817 00:53:16.976477   53120 start.go:119] gopshost.Virtualization returned error: not implemented yet
	I0817 00:53:16.979482   53120 out.go:177] * [bridge-20210817002157-111344] minikube v1.22.0 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	I0817 00:53:16.979482   53120 notify.go:169] Checking for updates...
	I0817 00:53:16.981515   53120 out.go:177]   - KUBECONFIG=C:\Users\jenkins\minikube-integration\kubeconfig
	I0817 00:53:16.983492   53120 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins\minikube-integration\.minikube
	I0817 00:53:16.984498   53120 out.go:177]   - MINIKUBE_LOCATION=12230
	I0817 00:53:16.986472   53120 config.go:177] Loaded profile config "calico-20210817002204-111344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.21.3
	I0817 00:53:16.986472   53120 config.go:177] Loaded profile config "enable-default-cni-20210817002157-111344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.21.3
	I0817 00:53:16.986472   53120 config.go:177] Loaded profile config "kindnet-20210817002204-111344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.21.3
	I0817 00:53:16.986472   53120 driver.go:335] Setting default libvirt URI to qemu:///system
	I0817 00:53:18.720205   53120 docker.go:132] docker version: linux-20.10.2
	I0817 00:53:18.726313   53120 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 00:53:19.565685   53120 info.go:263] docker info: {ID:4XCY:3GZD:KK67:IPM7:RRQF:WWZF:OGQ6:X6HQ:572M:7N57:P63G:EAE5 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:true NGoroutines:61 SystemTime:2021-08-17 00:53:19.1836596 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0817 00:53:19.568507   53120 out.go:177] * Using the docker driver based on user configuration
	I0817 00:53:19.568567   53120 start.go:278] selected driver: docker
	I0817 00:53:19.568689   53120 start.go:751] validating driver "docker" against <nil>
	I0817 00:53:19.568785   53120 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0817 00:53:19.771583   53120 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 00:53:20.546227   53120 info.go:263] docker info: {ID:4XCY:3GZD:KK67:IPM7:RRQF:WWZF:OGQ6:X6HQ:572M:7N57:P63G:EAE5 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:61 SystemTime:2021-08-17 00:53:20.2371433 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0817 00:53:20.546227   53120 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0817 00:53:20.547457   53120 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0817 00:53:20.547457   53120 cni.go:93] Creating CNI manager for "bridge"
	I0817 00:53:20.547699   53120 start_flags.go:272] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0817 00:53:20.547699   53120 start_flags.go:277] config:
	{Name:bridge-20210817002157-111344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:bridge-20210817002157-111344 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 00:53:20.553034   53120 out.go:177] * Starting control plane node bridge-20210817002157-111344 in cluster bridge-20210817002157-111344
	I0817 00:53:20.553275   53120 cache.go:117] Beginning downloading kic base image for docker with docker
	I0817 00:53:20.554325   53120 out.go:177] * Pulling base image ...
	I0817 00:53:20.554325   53120 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
	I0817 00:53:20.554325   53120 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0817 00:53:20.554325   53120 preload.go:147] Found local preload: C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4
	I0817 00:53:20.554325   53120 cache.go:56] Caching tarball of preloaded images
	I0817 00:53:20.554325   53120 preload.go:173] Found C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0817 00:53:20.554325   53120 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on docker
	I0817 00:53:20.554325   53120 profile.go:148] Saving config to C:\Users\jenkins\minikube-integration\.minikube\profiles\bridge-20210817002157-111344\config.json ...
	I0817 00:53:20.554325   53120 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\.minikube\profiles\bridge-20210817002157-111344\config.json: {Name:mk6cd4d0ec338b4f6bd51faadd13da5077ea0504 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
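
The lock.go line above shows the config.json write being serialized behind a named lock with a 500ms retry delay and a 1m timeout. A rough sketch of that delay/timeout polling pattern using a plain lock file (an assumption for illustration; minikube's actual lock implementation differs):

// Illustrative lock-file sketch, not minikube's lock.go.
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

func writeFileLocked(path string, data []byte, delay, timeout time.Duration) error {
	lock := path + ".lock"
	deadline := time.Now().Add(timeout)
	for {
		// O_CREATE|O_EXCL makes creation of the lock file atomic.
		f, err := os.OpenFile(lock, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			defer os.Remove(lock)
			defer f.Close()
			return os.WriteFile(path, data, 0o644)
		}
		if time.Now().After(deadline) {
			return errors.New("timed out acquiring " + lock)
		}
		time.Sleep(delay) // Delay:500ms in the log
	}
}

func main() {
	err := writeFileLocked("config.json", []byte("{}"), 500*time.Millisecond, time.Minute)
	fmt.Println(err)
}
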
	I0817 00:53:21.054008   53120 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon, skipping pull
	I0817 00:53:21.054008   53120 cache.go:139] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in daemon, skipping load
	I0817 00:53:21.054234   53120 cache.go:205] Successfully downloaded all kic artifacts
	I0817 00:53:21.054816   53120 start.go:313] acquiring machines lock for bridge-20210817002157-111344: {Name:mkb9ba5241d5e0c05fd26c132158e9474415db2c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 00:53:21.055146   53120 start.go:317] acquired machines lock for "bridge-20210817002157-111344" in 330.4µs
	I0817 00:53:21.055396   53120 start.go:89] Provisioning new machine with config: &{Name:bridge-20210817002157-111344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:bridge-20210817002157-111344 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0817 00:53:21.055640   53120 start.go:126] createHost starting for "" (driver="docker")
	I0817 00:53:21.058014   53120 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0817 00:53:21.058374   53120 start.go:160] libmachine.API.Create for "bridge-20210817002157-111344" (driver="docker")
	I0817 00:53:21.058617   53120 client.go:168] LocalClient.Create starting
	I0817 00:53:21.059467   53120 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem
	I0817 00:53:21.059764   53120 main.go:130] libmachine: Decoding PEM data...
	I0817 00:53:21.059875   53120 main.go:130] libmachine: Parsing certificate...
	I0817 00:53:21.060332   53120 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem
	I0817 00:53:21.060519   53120 main.go:130] libmachine: Decoding PEM data...
	I0817 00:53:21.060660   53120 main.go:130] libmachine: Parsing certificate...
	I0817 00:53:21.076977   53120 cli_runner.go:115] Run: docker network inspect bridge-20210817002157-111344 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0817 00:53:21.593585   53120 cli_runner.go:162] docker network inspect bridge-20210817002157-111344 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0817 00:53:21.606279   53120 network_create.go:255] running [docker network inspect bridge-20210817002157-111344] to gather additional debugging logs...
	I0817 00:53:21.606425   53120 cli_runner.go:115] Run: docker network inspect bridge-20210817002157-111344
	W0817 00:53:22.116948   53120 cli_runner.go:162] docker network inspect bridge-20210817002157-111344 returned with exit code 1
	I0817 00:53:22.117276   53120 network_create.go:258] error running [docker network inspect bridge-20210817002157-111344]: docker network inspect bridge-20210817002157-111344: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: bridge-20210817002157-111344
	I0817 00:53:22.117276   53120 network_create.go:260] output of [docker network inspect bridge-20210817002157-111344]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: bridge-20210817002157-111344
	
	** /stderr **
	I0817 00:53:22.123219   53120 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0817 00:53:22.643260   53120 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00077c260] misses:0}
	I0817 00:53:22.643260   53120 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0817 00:53:22.643260   53120 network_create.go:106] attempt to create docker network bridge-20210817002157-111344 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0817 00:53:22.645383   53120 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20210817002157-111344
	W0817 00:53:23.116198   53120 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20210817002157-111344 returned with exit code 1
	W0817 00:53:23.116198   53120 network_create.go:98] failed to create docker network bridge-20210817002157-111344 192.168.49.0/24, will retry: subnet is taken
	I0817 00:53:23.126958   53120 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00077c260] amended:false}} dirty:map[] misses:0}
	I0817 00:53:23.127160   53120 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0817 00:53:23.135609   53120 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00077c260] amended:true}} dirty:map[192.168.49.0:0xc00077c260 192.168.58.0:0xc00076c180] misses:0}
	I0817 00:53:23.135609   53120 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0817 00:53:23.135609   53120 network_create.go:106] attempt to create docker network bridge-20210817002157-111344 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0817 00:53:23.141146   53120 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20210817002157-111344
	I0817 00:53:23.800811   53120 network_create.go:90] docker network bridge-20210817002157-111344 192.168.58.0/24 created
	I0817 00:53:23.800811   53120 kic.go:106] calculated static IP "192.168.58.2" for the "bridge-20210817002157-111344" container
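
Above, the network creator reserves 192.168.49.0/24, finds the subnet already taken on the daemon side, skips it, and settles on 192.168.58.0/24, deriving the gateway (.1) and the node's static IP (.2). A compressed Go sketch of that walk (the starting subnet and the step of 9 mirror the log; the reservation map is a stand-in):

// Illustrative sketch of the subnet selection, not minikube's network.go.
package main

import (
	"fmt"
	"net"
)

func main() {
	reserved := map[string]bool{"192.168.49.0": true} // unexpired reservation, as logged

	for third := 49; third <= 255; third += 9 { // candidates: 49, 58, 67, ...
		base := fmt.Sprintf("192.168.%d.0", third)
		if reserved[base] {
			fmt.Println("skipping subnet with unexpired reservation:", base)
			continue
		}
		ip := net.ParseIP(base).To4()
		gateway := net.IPv4(ip[0], ip[1], ip[2], 1) // e.g. 192.168.58.1
		nodeIP := net.IPv4(ip[0], ip[1], ip[2], 2)  // e.g. 192.168.58.2
		fmt.Printf("using free private subnet %s/24 (gateway %s, static node IP %s)\n",
			base, gateway, nodeIP)
		return
	}
}
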
	I0817 00:53:23.815596   53120 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0817 00:53:24.277331   53120 cli_runner.go:115] Run: docker volume create bridge-20210817002157-111344 --label name.minikube.sigs.k8s.io=bridge-20210817002157-111344 --label created_by.minikube.sigs.k8s.io=true
	I0817 00:53:27.257327   53120 cli_runner.go:168] Completed: docker volume create bridge-20210817002157-111344 --label name.minikube.sigs.k8s.io=bridge-20210817002157-111344 --label created_by.minikube.sigs.k8s.io=true: (2.9796114s)
	I0817 00:53:27.257327   53120 oci.go:102] Successfully created a docker volume bridge-20210817002157-111344
	I0817 00:53:27.266346   53120 cli_runner.go:115] Run: docker run --rm --name bridge-20210817002157-111344-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-20210817002157-111344 --entrypoint /usr/bin/test -v bridge-20210817002157-111344:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib
	I0817 00:53:29.317557   53120 cli_runner.go:168] Completed: docker run --rm --name bridge-20210817002157-111344-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-20210817002157-111344 --entrypoint /usr/bin/test -v bridge-20210817002157-111344:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib: (2.0511329s)
	I0817 00:53:29.318035   53120 oci.go:106] Successfully prepared a docker volume bridge-20210817002157-111344
	I0817 00:53:29.318182   53120 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
	I0817 00:53:29.318276   53120 kic.go:179] Starting extracting preloaded images to volume ...
	I0817 00:53:29.325159   53120 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v bridge-20210817002157-111344:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir
	I0817 00:53:29.325879   53120 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	W0817 00:53:29.859893   53120 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v bridge-20210817002157-111344:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I0817 00:53:29.860395   53120 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v bridge-20210817002157-111344:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: System.Exception: The notification platform is unavailable.
	
	   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)
	   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__6.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.WPF\PromptShareDirectory.cs:line 53
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__8.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.ApiServices\Mounting\FileSharing.cs:line 95
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__6.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.ApiServices\Mounting\FileSharing.cs:line 55
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.HttpApi\Controllers\FilesharingController.cs:line 21
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()
	[serialized exception metadata trimmed; it names the failing method Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(System.String) and repeats the RestrictedDescription "The notification platform is unavailable."]
	See 'docker run --help'.
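
The exit-125 failure above originates in Docker Desktop itself: the stack trace shows the daemon's 500 response coming from Docker.ApiServices.Mounting.FileSharing while prompting to share the host directory, and the Windows toast-notification platform needed for that prompt is unavailable on the build machine. For reference, the operation being refused is just tar running inside the kicbase image with two mounts; an illustrative Go equivalent of the logged command (image digest omitted here for brevity):

// Sketch of the preload-extraction step; mirrors the logged docker run.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	tarball := `C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4`
	volume := "bridge-20210817002157-111344"
	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032"

	// tar runs inside the image; the host only provides the two mounts.
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		// exit status 125 means the daemon refused to start the container
		// (here: the host-directory file-sharing prompt failed).
		fmt.Printf("extract failed: %v\n%s", err, out)
	}
}
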
	I0817 00:53:30.102572   53120 info.go:263] docker info: {ID:4XCY:3GZD:KK67:IPM7:RRQF:WWZF:OGQ6:X6HQ:572M:7N57:P63G:EAE5 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:true NGoroutines:56 SystemTime:2021-08-17 00:53:29.7662209 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0817 00:53:30.119215   53120 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0817 00:53:30.859899   53120 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname bridge-20210817002157-111344 --name bridge-20210817002157-111344 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-20210817002157-111344 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=bridge-20210817002157-111344 --network bridge-20210817002157-111344 --ip 192.168.58.2 --volume bridge-20210817002157-111344:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6
	W0817 00:53:31.541179   53120 cli_runner.go:162] docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname bridge-20210817002157-111344 --name bridge-20210817002157-111344 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-20210817002157-111344 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=bridge-20210817002157-111344 --network bridge-20210817002157-111344 --ip 192.168.58.2 --volume bridge-20210817002157-111344:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 returned with exit code 125
	I0817 00:53:31.541179   53120 client.go:171] LocalClient.Create took 10.4821621s
	I0817 00:53:33.548908   53120 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0817 00:53:33.555767   53120 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210817002157-111344
	W0817 00:53:34.010816   53120 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210817002157-111344 returned with exit code 1
	I0817 00:53:34.011282   53120 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0817 00:53:34.297118   53120 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210817002157-111344
	W0817 00:53:34.728416   53120 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210817002157-111344 returned with exit code 1
	I0817 00:53:34.728787   53120 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0817 00:53:35.275894   53120 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210817002157-111344
	W0817 00:53:35.719851   53120 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210817002157-111344 returned with exit code 1
	W0817 00:53:35.720128   53120 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0817 00:53:35.720517   53120 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0817 00:53:35.720517   53120 start.go:129] duration metric: createHost completed in 14.6643186s
	I0817 00:53:35.720517   53120 start.go:80] releasing machines lock for "bridge-20210817002157-111344", held for 14.6647387s
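
The retry.go lines above (276ms, then 540ms) show the general pattern minikube applies to flaky docker and ssh calls: retry with roughly doubling, jittered delays until a budget is exhausted. A self-contained sketch of that shape (the jitter and cap values are illustrative, not minikube's):

// Illustrative retry-with-backoff sketch.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retry(deadline time.Duration, op func() error) error {
	start := time.Now()
	delay := 250 * time.Millisecond
	for {
		err := op()
		if err == nil {
			return nil
		}
		if time.Since(start) > deadline {
			return fmt.Errorf("giving up: %w", err)
		}
		d := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
		delay *= 2 // roughly double each attempt
	}
}

func main() {
	attempts := 0
	_ = retry(2*time.Second, func() error {
		attempts++
		if attempts < 3 {
			return errors.New("container not running")
		}
		return nil
	})
}
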
	W0817 00:53:35.720826   53120 start.go:521] error starting host: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname bridge-20210817002157-111344 --name bridge-20210817002157-111344 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-20210817002157-111344 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=bridge-20210817002157-111344 --network bridge-20210817002157-111344 --ip 192.168.58.2 --volume bridge-20210817002157-111344:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6: exit status 125
	stdout:
	823cfd48f5b9d35d4ffd8b0aa1d97ea216bb1105638747a72311da2cb02ecc97
	
	stderr:
	docker: Error response from daemon: network bridge-20210817002157-111344 not found.
	I0817 00:53:35.735316   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	W0817 00:53:36.203078   53120 start.go:526] delete host: Docker machine "bridge-20210817002157-111344" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	W0817 00:53:36.203448   53120 out.go:242] ! StartHost failed, but will try again: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname bridge-20210817002157-111344 --name bridge-20210817002157-111344 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-20210817002157-111344 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=bridge-20210817002157-111344 --network bridge-20210817002157-111344 --ip 192.168.58.2 --volume bridge-20210817002157-111344:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6: exit status 125
	stdout:
	823cfd48f5b9d35d4ffd8b0aa1d97ea216bb1105638747a72311da2cb02ecc97
	
	stderr:
	docker: Error response from daemon: network bridge-20210817002157-111344 not found.
	
	! StartHost failed, but will try again: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname bridge-20210817002157-111344 --name bridge-20210817002157-111344 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-20210817002157-111344 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=bridge-20210817002157-111344 --network bridge-20210817002157-111344 --ip 192.168.58.2 --volume bridge-20210817002157-111344:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6: exit status 125
	stdout:
	823cfd48f5b9d35d4ffd8b0aa1d97ea216bb1105638747a72311da2cb02ecc97
	
	stderr:
	docker: Error response from daemon: network bridge-20210817002157-111344 not found.
	
	I0817 00:53:36.203448   53120 start.go:536] Will try again in 5 seconds ...
	I0817 00:53:41.204344   53120 start.go:313] acquiring machines lock for bridge-20210817002157-111344: {Name:mkb9ba5241d5e0c05fd26c132158e9474415db2c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 00:53:41.204753   53120 start.go:317] acquired machines lock for "bridge-20210817002157-111344" in 408.7µs
	I0817 00:53:41.204907   53120 start.go:93] Skipping create...Using existing machine configuration
	I0817 00:53:41.205032   53120 fix.go:55] fixHost starting: 
	I0817 00:53:41.217208   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:53:41.673200   53120 fix.go:108] recreateIfNeeded on bridge-20210817002157-111344: state= err=<nil>
	I0817 00:53:41.673200   53120 fix.go:113] machineExists: false. err=machine does not exist
	I0817 00:53:41.676038   53120 out.go:177] * docker "bridge-20210817002157-111344" container is missing, will recreate.
	I0817 00:53:41.676282   53120 delete.go:124] DEMOLISHING bridge-20210817002157-111344 ...
	I0817 00:53:41.689351   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:53:42.183216   53120 stop.go:79] host is in state 
	I0817 00:53:42.183560   53120 main.go:130] libmachine: Stopping "bridge-20210817002157-111344"...
	I0817 00:53:42.198042   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:53:42.679720   53120 kic_runner.go:94] Run: systemctl --version
	I0817 00:53:42.679720   53120 kic_runner.go:115] Args: [docker exec --privileged bridge-20210817002157-111344 systemctl --version]
	I0817 00:53:43.177010   53120 kic_runner.go:94] Run: sudo service kubelet stop
	I0817 00:53:43.177010   53120 kic_runner.go:115] Args: [docker exec --privileged bridge-20210817002157-111344 sudo service kubelet stop]
	I0817 00:53:43.658451   53120 openrc.go:165] stop output: 
	** stderr ** 
	Error response from daemon: Container 823cfd48f5b9d35d4ffd8b0aa1d97ea216bb1105638747a72311da2cb02ecc97 is not running
	
	** /stderr **
	W0817 00:53:43.658451   53120 kic.go:443] couldn't stop kubelet. will continue with stop anyways: sudo service kubelet stop: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 823cfd48f5b9d35d4ffd8b0aa1d97ea216bb1105638747a72311da2cb02ecc97 is not running
	I0817 00:53:43.672968   53120 kic_runner.go:94] Run: sudo service kubelet stop
	I0817 00:53:43.672968   53120 kic_runner.go:115] Args: [docker exec --privileged bridge-20210817002157-111344 sudo service kubelet stop]
	I0817 00:53:44.158308   53120 openrc.go:165] stop output: 
	** stderr ** 
	Error response from daemon: Container 823cfd48f5b9d35d4ffd8b0aa1d97ea216bb1105638747a72311da2cb02ecc97 is not running
	
	** /stderr **
	W0817 00:53:44.158594   53120 kic.go:445] couldn't force stop kubelet. will continue with stop anyways: sudo service kubelet stop: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 823cfd48f5b9d35d4ffd8b0aa1d97ea216bb1105638747a72311da2cb02ecc97 is not running
	I0817 00:53:44.164040   53120 kic_runner.go:94] Run: docker ps -a --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}
	I0817 00:53:44.164040   53120 kic_runner.go:115] Args: [docker exec --privileged bridge-20210817002157-111344 docker ps -a --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}]
	I0817 00:53:44.660237   53120 kic.go:456] unable to list containers: docker: docker ps -a --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 823cfd48f5b9d35d4ffd8b0aa1d97ea216bb1105638747a72311da2cb02ecc97 is not running
	I0817 00:53:44.660237   53120 kic.go:466] successfully stopped kubernetes!
	I0817 00:53:44.678520   53120 kic_runner.go:94] Run: pgrep kube-apiserver
	I0817 00:53:44.678520   53120 kic_runner.go:115] Args: [docker exec --privileged bridge-20210817002157-111344 pgrep kube-apiserver]
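
Each kic_runner.go entry above pairs a logical node command (`sudo service kubelet stop`, `pgrep kube-apiserver`) with an Args line showing how it actually runs: wrapped in `docker exec --privileged <node-container>` on the host. A small Go sketch of that wrapper:

// Illustrative sketch of the kic_runner exec pattern.
package main

import (
	"fmt"
	"os/exec"
)

// kicRun mirrors the Args lines above: every node-level command becomes
// `docker exec --privileged <container> <cmd...>` on the host.
func kicRun(container string, cmd ...string) ([]byte, error) {
	args := append([]string{"exec", "--privileged", container}, cmd...)
	fmt.Println("Args:", append([]string{"docker"}, args...))
	return exec.Command("docker", args...).CombinedOutput()
}

func main() {
	out, err := kicRun("bridge-20210817002157-111344", "sudo", "service", "kubelet", "stop")
	fmt.Printf("stop output: %s (err: %v)\n", out, err)
}
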
	I0817 00:53:45.701845   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:53:49.200888   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:53:52.699064   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:53:56.180957   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:53:59.675994   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:54:03.165466   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:54:06.713865   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:54:10.260890   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:54:13.833153   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:54:17.395030   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:54:20.947091   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:54:24.495869   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:54:28.049607   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:54:31.598983   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:54:35.143195   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:54:38.651833   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:54:42.156573   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:54:45.672528   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:54:49.152315   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:54:52.649609   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:54:56.138897   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:54:59.632595   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:55:03.123107   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:55:06.597667   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:55:10.115069   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:55:13.633392   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:55:17.134179   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:55:20.619185   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:55:24.099314   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:55:27.615188   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:55:31.103437   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:55:34.597228   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:55:38.135098   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:55:41.640852   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:55:45.195259   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:55:48.756582   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:55:52.256884   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:55:55.733865   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:55:59.225795   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:56:02.692685   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:56:06.162818   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:56:09.645820   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:56:13.171837   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:56:16.657383   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:56:20.169428   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:56:23.673596   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:56:27.193345   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:56:30.695038   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:56:34.160931   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:56:37.617691   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:56:41.101838   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:56:44.552532   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:56:48.040750   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:56:51.503265   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:56:54.961840   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:56:58.422323   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:57:01.875379   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:57:05.350642   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:57:08.812451   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:57:12.262581   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:57:15.704803   53120 stop.go:59] stop err: Maximum number of retries (60) exceeded
	I0817 00:57:15.705263   53120 delete.go:129] stophost failed (probably ok): Temporary Error: stop: Maximum number of retries (60) exceeded
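
The long run of `docker container inspect ... --format={{.State.Status}}` calls above is a bounded status poll: one inspect every few seconds, sixty attempts, then "Maximum number of retries (60) exceeded". Sketched in Go (the interval is approximated from the log timestamps):

// Illustrative bounded-poll sketch.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitForStatus(name, want string, maxRetries int) error {
	for i := 0; i < maxRetries; i++ {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").Output()
		if err == nil && strings.TrimSpace(string(out)) == want {
			return nil
		}
		time.Sleep(3500 * time.Millisecond) // ~3.5s between inspects in the log
	}
	return fmt.Errorf("Maximum number of retries (%d) exceeded", maxRetries)
}

func main() {
	if err := waitForStatus("bridge-20210817002157-111344", "exited", 60); err != nil {
		fmt.Println("stop err:", err)
	}
}
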
	I0817 00:57:15.720619   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	W0817 00:57:16.171776   53120 delete.go:135] deletehost failed: Docker machine "bridge-20210817002157-111344" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0817 00:57:16.178593   53120 cli_runner.go:115] Run: docker container inspect -f {{.Id}} bridge-20210817002157-111344
	I0817 00:57:16.614203   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:57:17.072069   53120 cli_runner.go:115] Run: docker exec --privileged -t bridge-20210817002157-111344 /bin/bash -c "sudo init 0"
	W0817 00:57:17.568203   53120 cli_runner.go:162] docker exec --privileged -t bridge-20210817002157-111344 /bin/bash -c "sudo init 0" returned with exit code 1
	I0817 00:57:17.568592   53120 oci.go:632] error shutdown bridge-20210817002157-111344: docker exec --privileged -t bridge-20210817002157-111344 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 823cfd48f5b9d35d4ffd8b0aa1d97ea216bb1105638747a72311da2cb02ecc97 is not running
	I0817 00:57:18.578146   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:57:19.052457   53120 oci.go:646] temporary error: container bridge-20210817002157-111344 status is  but expect it to be exited
	I0817 00:57:19.052457   53120 oci.go:652] Successfully shutdown container bridge-20210817002157-111344
	I0817 00:57:19.060543   53120 cli_runner.go:115] Run: docker rm -f -v bridge-20210817002157-111344
	I0817 00:57:19.550384   53120 cli_runner.go:115] Run: docker container inspect -f {{.Id}} bridge-20210817002157-111344
	W0817 00:57:19.984288   53120 cli_runner.go:162] docker container inspect -f {{.Id}} bridge-20210817002157-111344 returned with exit code 1
	I0817 00:57:19.990958   53120 cli_runner.go:115] Run: docker network inspect bridge-20210817002157-111344 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0817 00:57:20.431510   53120 cli_runner.go:162] docker network inspect bridge-20210817002157-111344 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0817 00:57:20.438242   53120 network_create.go:255] running [docker network inspect bridge-20210817002157-111344] to gather additional debugging logs...
	I0817 00:57:20.438242   53120 cli_runner.go:115] Run: docker network inspect bridge-20210817002157-111344
	W0817 00:57:20.890529   53120 cli_runner.go:162] docker network inspect bridge-20210817002157-111344 returned with exit code 1
	I0817 00:57:20.890529   53120 network_create.go:258] error running [docker network inspect bridge-20210817002157-111344]: docker network inspect bridge-20210817002157-111344: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: bridge-20210817002157-111344
	I0817 00:57:20.890529   53120 network_create.go:260] output of [docker network inspect bridge-20210817002157-111344]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: bridge-20210817002157-111344
	
	** /stderr **
	W0817 00:57:20.892164   53120 delete.go:139] delete failed (probably ok) <nil>
	I0817 00:57:20.892164   53120 fix.go:120] Sleeping 1 second for extra luck!
	I0817 00:57:21.892537   53120 start.go:126] createHost starting for "" (driver="docker")
	I0817 00:57:21.896365   53120 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0817 00:57:21.897070   53120 start.go:160] libmachine.API.Create for "bridge-20210817002157-111344" (driver="docker")
	I0817 00:57:21.897070   53120 client.go:168] LocalClient.Create starting
	I0817 00:57:21.898441   53120 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem
	I0817 00:57:21.898848   53120 main.go:130] libmachine: Decoding PEM data...
	I0817 00:57:21.899029   53120 main.go:130] libmachine: Parsing certificate...
	I0817 00:57:21.899791   53120 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem
	I0817 00:57:21.900059   53120 main.go:130] libmachine: Decoding PEM data...
	I0817 00:57:21.900164   53120 main.go:130] libmachine: Parsing certificate...
	I0817 00:57:21.907844   53120 cli_runner.go:115] Run: docker network inspect bridge-20210817002157-111344 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0817 00:57:22.352870   53120 cli_runner.go:162] docker network inspect bridge-20210817002157-111344 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0817 00:57:22.359399   53120 network_create.go:255] running [docker network inspect bridge-20210817002157-111344] to gather additional debugging logs...
	I0817 00:57:22.359399   53120 cli_runner.go:115] Run: docker network inspect bridge-20210817002157-111344
	W0817 00:57:22.809650   53120 cli_runner.go:162] docker network inspect bridge-20210817002157-111344 returned with exit code 1
	I0817 00:57:22.809650   53120 network_create.go:258] error running [docker network inspect bridge-20210817002157-111344]: docker network inspect bridge-20210817002157-111344: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: bridge-20210817002157-111344
	I0817 00:57:22.809650   53120 network_create.go:260] output of [docker network inspect bridge-20210817002157-111344]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: bridge-20210817002157-111344
	
	** /stderr **
	I0817 00:57:22.816410   53120 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0817 00:57:23.283908   53120 network.go:284] reusing subnet 192.168.49.0 that has expired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00077c260] amended:true}} dirty:map[192.168.49.0:0xc00077c260 192.168.58.0:0xc00076c180] misses:0}
	I0817 00:57:23.283908   53120 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0817 00:57:23.284461   53120 network_create.go:106] attempt to create docker network bridge-20210817002157-111344 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0817 00:57:23.290461   53120 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20210817002157-111344
	W0817 00:57:23.722705   53120 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20210817002157-111344 returned with exit code 1
	W0817 00:57:23.722705   53120 network_create.go:98] failed to create docker network bridge-20210817002157-111344 192.168.49.0/24, will retry: subnet is taken
	I0817 00:57:23.731142   53120 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00077c260] amended:true}} dirty:map[192.168.49.0:0xc00077c260 192.168.58.0:0xc00076c180] misses:0}
	I0817 00:57:23.731142   53120 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0817 00:57:23.738414   53120 network.go:284] reusing subnet 192.168.58.0 that has expired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00077c260] amended:true}} dirty:map[192.168.49.0:0xc00077c260 192.168.58.0:0xc00076c180] misses:1}
	I0817 00:57:23.738414   53120 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0817 00:57:23.738414   53120 network_create.go:106] attempt to create docker network bridge-20210817002157-111344 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0817 00:57:23.745209   53120 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20210817002157-111344
	I0817 00:57:24.358432   53120 network_create.go:90] docker network bridge-20210817002157-111344 192.168.58.0/24 created
	I0817 00:57:24.358779   53120 kic.go:106] calculated static IP "192.168.58.2" for the "bridge-20210817002157-111344" container
	I0817 00:57:24.374427   53120 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0817 00:57:24.815154   53120 cli_runner.go:115] Run: docker volume create bridge-20210817002157-111344 --label name.minikube.sigs.k8s.io=bridge-20210817002157-111344 --label created_by.minikube.sigs.k8s.io=true
	I0817 00:57:25.241030   53120 oci.go:102] Successfully created a docker volume bridge-20210817002157-111344
	I0817 00:57:25.246677   53120 cli_runner.go:115] Run: docker run --rm --name bridge-20210817002157-111344-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-20210817002157-111344 --entrypoint /usr/bin/test -v bridge-20210817002157-111344:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib
	I0817 00:57:27.116422   53120 cli_runner.go:168] Completed: docker run --rm --name bridge-20210817002157-111344-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-20210817002157-111344 --entrypoint /usr/bin/test -v bridge-20210817002157-111344:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -d /var/lib: (1.8692942s)
	I0817 00:57:27.116917   53120 oci.go:106] Successfully prepared a docker volume bridge-20210817002157-111344
	I0817 00:57:27.117143   53120 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
	I0817 00:57:27.117301   53120 kic.go:179] Starting extracting preloaded images to volume ...
	I0817 00:57:27.124651   53120 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0817 00:57:27.124651   53120 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v bridge-20210817002157-111344:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir
	W0817 00:57:27.628160   53120 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v bridge-20210817002157-111344:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I0817 00:57:27.628709   53120 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v bridge-20210817002157-111344:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: System.Exception: The notification platform is unavailable.
	   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)
	   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__6.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.WPF\PromptShareDirectory.cs:line 53
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__8.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.ApiServices\Mounting\FileSharing.cs:line 95
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__6.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.ApiServices\Mounting\FileSharing.cs:line 55
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.HttpApi\Controllers\FilesharingController.cs:line 21
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()
	See 'docker run --help'.
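Exit status 125 means the docker daemon itself rejected the run (here Docker Desktop's host-path file sharing died while raising its WPF prompt with "The notification platform is unavailable"), not that tar failed inside a container. minikube treats this as non-fatal and, as the 00:57:53 lines further below show, falls back to scp-ing the preload tarball over SSH and untarring it in the guest. A hedged Go sketch of that detect-and-fall-back pattern; the docker arguments and path are simplified placeholders, not minikube's real command:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// extractViaVolume tries the fast path: bind-mount the tarball and run a
	// throwaway container against it. Placeholder args for illustration.
	func extractViaVolume() error {
		cmd := exec.Command("docker", "run", "--rm",
			"-v", `C:\path\to\preloaded.tar.lz4:/preloaded.tar:ro`, // hypothetical host path
			"busybox", "true")
		if err := cmd.Run(); err != nil {
			if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 125 {
				// 125: the daemon refused the request (e.g. file sharing unavailable).
				return fmt.Errorf("daemon rejected run (exit 125): %w", err)
			}
			return err
		}
		return nil
	}

	func main() {
		if err := extractViaVolume(); err != nil {
			fmt.Println("volume extract failed, falling back to scp:", err)
			// Fallback: stream the tarball over SSH and run `tar -I lz4 -xf`
			// in the guest, as this log does at 00:57:53 and 00:58:30.
		}
	}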
	I0817 00:57:27.890946   53120 info.go:263] docker info: {ID:4XCY:3GZD:KK67:IPM7:RRQF:WWZF:OGQ6:X6HQ:572M:7N57:P63G:EAE5 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:true NGoroutines:57 SystemTime:2021-08-17 00:57:27.5451355 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0817 00:57:27.899013   53120 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0817 00:57:28.611142   53120 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname bridge-20210817002157-111344 --name bridge-20210817002157-111344 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-20210817002157-111344 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=bridge-20210817002157-111344 --network bridge-20210817002157-111344 --ip 192.168.58.2 --volume bridge-20210817002157-111344:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6
	I0817 00:57:30.291355   53120 cli_runner.go:168] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname bridge-20210817002157-111344 --name bridge-20210817002157-111344 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-20210817002157-111344 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=bridge-20210817002157-111344 --network bridge-20210817002157-111344 --ip 192.168.58.2 --volume bridge-20210817002157-111344:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6: (1.6801485s)
	I0817 00:57:30.298856   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Running}}
	I0817 00:57:30.758393   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:57:31.243737   53120 cli_runner.go:115] Run: docker exec bridge-20210817002157-111344 stat /var/lib/dpkg/alternatives/iptables
	I0817 00:57:31.911876   53120 oci.go:278] the created container "bridge-20210817002157-111344" has a running status.
	I0817 00:57:31.912136   53120 kic.go:210] Creating ssh key for kic: C:\Users\jenkins\minikube-integration\.minikube\machines\bridge-20210817002157-111344\id_rsa...
	I0817 00:57:32.026109   53120 kic_runner.go:188] docker (temp): C:\Users\jenkins\minikube-integration\.minikube\machines\bridge-20210817002157-111344\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0817 00:57:32.789466   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:57:33.240392   53120 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0817 00:57:33.240392   53120 kic_runner.go:115] Args: [docker exec --privileged bridge-20210817002157-111344 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0817 00:57:33.841920   53120 kic.go:250] ensuring only current user has permissions to key file located at : C:\Users\jenkins\minikube-integration\.minikube\machines\bridge-20210817002157-111344\id_rsa...
	I0817 00:57:34.457167   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:57:34.880711   53120 machine.go:88] provisioning docker machine ...
	I0817 00:57:34.881044   53120 ubuntu.go:169] provisioning hostname "bridge-20210817002157-111344"
	I0817 00:57:34.887804   53120 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210817002157-111344
	I0817 00:57:35.317531   53120 main.go:130] libmachine: Using SSH client type: native
	I0817 00:57:35.327026   53120 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x4c95a0] 0x4c9560 <nil>  [] 0s} 127.0.0.1 55258 <nil> <nil>}
	I0817 00:57:35.327026   53120 main.go:130] libmachine: About to run SSH command:
	sudo hostname bridge-20210817002157-111344 && echo "bridge-20210817002157-111344" | sudo tee /etc/hostname
	I0817 00:57:35.582901   53120 main.go:130] libmachine: SSH cmd err, output: <nil>: bridge-20210817002157-111344
	
	I0817 00:57:35.589423   53120 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210817002157-111344
	I0817 00:57:36.043307   53120 main.go:130] libmachine: Using SSH client type: native
	I0817 00:57:36.043896   53120 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x4c95a0] 0x4c9560 <nil>  [] 0s} 127.0.0.1 55258 <nil> <nil>}
	I0817 00:57:36.043896   53120 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-20210817002157-111344' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-20210817002157-111344/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-20210817002157-111344' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 00:57:36.274551   53120 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0817 00:57:36.274790   53120 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins\minikube-integration\.minikube CaCertPath:C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins\minikube-integration\.minikube}
	I0817 00:57:36.274790   53120 ubuntu.go:177] setting up certificates
	I0817 00:57:36.274925   53120 provision.go:83] configureAuth start
	I0817 00:57:36.280935   53120 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-20210817002157-111344
	I0817 00:57:36.723800   53120 provision.go:138] copyHostCerts
	I0817 00:57:36.724729   53120 exec_runner.go:145] found C:\Users\jenkins\minikube-integration\.minikube/cert.pem, removing ...
	I0817 00:57:36.724729   53120 exec_runner.go:190] rm: C:\Users\jenkins\minikube-integration\.minikube\cert.pem
	I0817 00:57:36.725284   53120 exec_runner.go:152] cp: C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0817 00:57:36.727008   53120 exec_runner.go:145] found C:\Users\jenkins\minikube-integration\.minikube/key.pem, removing ...
	I0817 00:57:36.727008   53120 exec_runner.go:190] rm: C:\Users\jenkins\minikube-integration\.minikube\key.pem
	I0817 00:57:36.727460   53120 exec_runner.go:152] cp: C:\Users\jenkins\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins\minikube-integration\.minikube/key.pem (1679 bytes)
	I0817 00:57:36.728650   53120 exec_runner.go:145] found C:\Users\jenkins\minikube-integration\.minikube/ca.pem, removing ...
	I0817 00:57:36.728650   53120 exec_runner.go:190] rm: C:\Users\jenkins\minikube-integration\.minikube\ca.pem
	I0817 00:57:36.728881   53120 exec_runner.go:152] cp: C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0817 00:57:36.729608   53120 provision.go:112] generating server cert: C:\Users\jenkins\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.bridge-20210817002157-111344 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube bridge-20210817002157-111344]
	I0817 00:57:36.923980   53120 provision.go:172] copyRemoteCerts
	I0817 00:57:36.931203   53120 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 00:57:36.936073   53120 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210817002157-111344
	I0817 00:57:37.378093   53120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55258 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\bridge-20210817002157-111344\id_rsa Username:docker}
	I0817 00:57:37.518870   53120 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0817 00:57:37.570165   53120 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0817 00:57:37.632345   53120 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1257 bytes)
	I0817 00:57:37.704663   53120 provision.go:86] duration metric: configureAuth took 1.4296838s
	I0817 00:57:37.704876   53120 ubuntu.go:193] setting minikube options for container-runtime
	I0817 00:57:37.705428   53120 config.go:177] Loaded profile config "bridge-20210817002157-111344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.21.3
	I0817 00:57:37.711802   53120 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210817002157-111344
	I0817 00:57:38.142400   53120 main.go:130] libmachine: Using SSH client type: native
	I0817 00:57:38.142820   53120 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x4c95a0] 0x4c9560 <nil>  [] 0s} 127.0.0.1 55258 <nil> <nil>}
	I0817 00:57:38.142943   53120 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0817 00:57:38.354377   53120 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0817 00:57:38.354377   53120 ubuntu.go:71] root file system type: overlay
	I0817 00:57:38.355159   53120 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0817 00:57:38.361197   53120 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210817002157-111344
	I0817 00:57:38.791093   53120 main.go:130] libmachine: Using SSH client type: native
	I0817 00:57:38.791603   53120 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x4c95a0] 0x4c9560 <nil>  [] 0s} 127.0.0.1 55258 <nil> <nil>}
	I0817 00:57:38.791870   53120 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0817 00:57:39.024962   53120 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0817 00:57:39.030794   53120 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210817002157-111344
	I0817 00:57:39.482194   53120 main.go:130] libmachine: Using SSH client type: native
	I0817 00:57:39.482681   53120 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x4c95a0] 0x4c9560 <nil>  [] 0s} 127.0.0.1 55258 <nil> <nil>}
	I0817 00:57:39.482909   53120 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0817 00:57:41.208912   53120 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-07-30 19:52:33.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2021-08-17 00:57:39.018168000 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	+BindsTo=containerd.service
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0817 00:57:41.208912   53120 machine.go:91] provisioned docker machine in 6.3279601s
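The unit update above is idempotent by construction: the provisioner renders docker.service.new over SSH, runs `diff -u` against the live unit, and only swaps the file and restarts docker when they differ; on the second provisioning pass later in this log (00:57:48) the diff is empty, so nothing is restarted. A rough Go sketch of the same write/compare/swap pattern, assuming local file access rather than SSH (restart/enable steps elided):

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	// updateUnit replaces path with rendered only when the contents differ,
	// then reloads systemd. Illustrative, not minikube's provision.go code.
	func updateUnit(path string, rendered []byte) error {
		current, err := os.ReadFile(path)
		if err == nil && bytes.Equal(current, rendered) {
			return nil // unchanged: skip daemon-reload and restart
		}
		if err := os.WriteFile(path+".new", rendered, 0o644); err != nil {
			return err
		}
		if err := os.Rename(path+".new", path); err != nil {
			return err
		}
		return exec.Command("systemctl", "daemon-reload").Run()
	}

	func main() {
		fmt.Println(updateUnit("/tmp/docker.service", []byte("[Unit]\n")))
	}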
	I0817 00:57:41.208912   53120 client.go:171] LocalClient.Create took 19.3109342s
	I0817 00:57:41.209149   53120 start.go:168] duration metric: libmachine.API.Create for "bridge-20210817002157-111344" took 19.3111084s
	I0817 00:57:41.209149   53120 start.go:267] post-start starting for "bridge-20210817002157-111344" (driver="docker")
	I0817 00:57:41.209149   53120 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 00:57:41.216794   53120 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 00:57:41.222187   53120 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210817002157-111344
	I0817 00:57:41.659083   53120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55258 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\bridge-20210817002157-111344\id_rsa Username:docker}
	I0817 00:57:41.813380   53120 ssh_runner.go:149] Run: cat /etc/os-release
	I0817 00:57:41.828671   53120 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0817 00:57:41.828671   53120 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0817 00:57:41.828671   53120 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0817 00:57:41.828671   53120 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0817 00:57:41.828671   53120 filesync.go:126] Scanning C:\Users\jenkins\minikube-integration\.minikube\addons for local assets ...
	I0817 00:57:41.829097   53120 filesync.go:126] Scanning C:\Users\jenkins\minikube-integration\.minikube\files for local assets ...
	I0817 00:57:41.829928   53120 filesync.go:149] local asset: C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\1113442.pem -> 1113442.pem in /etc/ssl/certs
	I0817 00:57:41.838027   53120 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0817 00:57:41.862714   53120 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\1113442.pem --> /etc/ssl/certs/1113442.pem (1708 bytes)
	I0817 00:57:41.917727   53120 start.go:270] post-start completed in 708.5513ms
	I0817 00:57:41.929482   53120 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-20210817002157-111344
	I0817 00:57:42.373125   53120 profile.go:148] Saving config to C:\Users\jenkins\minikube-integration\.minikube\profiles\bridge-20210817002157-111344\config.json ...
	I0817 00:57:42.384792   53120 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0817 00:57:42.392194   53120 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210817002157-111344
	I0817 00:57:42.824679   53120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55258 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\bridge-20210817002157-111344\id_rsa Username:docker}
	I0817 00:57:42.957231   53120 start.go:129] duration metric: createHost completed in 21.0637707s
	I0817 00:57:42.974401   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	W0817 00:57:43.411555   53120 fix.go:134] unexpected machine state, will restart: <nil>
	I0817 00:57:43.411555   53120 machine.go:88] provisioning docker machine ...
	I0817 00:57:43.411555   53120 ubuntu.go:169] provisioning hostname "bridge-20210817002157-111344"
	I0817 00:57:43.417651   53120 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210817002157-111344
	I0817 00:57:43.867434   53120 main.go:130] libmachine: Using SSH client type: native
	I0817 00:57:43.868132   53120 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x4c95a0] 0x4c9560 <nil>  [] 0s} 127.0.0.1 55258 <nil> <nil>}
	I0817 00:57:43.868132   53120 main.go:130] libmachine: About to run SSH command:
	sudo hostname bridge-20210817002157-111344 && echo "bridge-20210817002157-111344" | sudo tee /etc/hostname
	I0817 00:57:44.114667   53120 main.go:130] libmachine: SSH cmd err, output: <nil>: bridge-20210817002157-111344
	
	I0817 00:57:44.126534   53120 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210817002157-111344
	I0817 00:57:44.574084   53120 main.go:130] libmachine: Using SSH client type: native
	I0817 00:57:44.574649   53120 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x4c95a0] 0x4c9560 <nil>  [] 0s} 127.0.0.1 55258 <nil> <nil>}
	I0817 00:57:44.574824   53120 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-20210817002157-111344' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-20210817002157-111344/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-20210817002157-111344' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 00:57:44.786142   53120 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0817 00:57:44.786142   53120 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins\minikube-integration\.minikube CaCertPath:C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins\minikube-integration\.minikube}
	I0817 00:57:44.786142   53120 ubuntu.go:177] setting up certificates
	I0817 00:57:44.786142   53120 provision.go:83] configureAuth start
	I0817 00:57:44.792218   53120 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-20210817002157-111344
	I0817 00:57:45.239869   53120 provision.go:138] copyHostCerts
	I0817 00:57:45.240402   53120 exec_runner.go:145] found C:\Users\jenkins\minikube-integration\.minikube/ca.pem, removing ...
	I0817 00:57:45.240402   53120 exec_runner.go:190] rm: C:\Users\jenkins\minikube-integration\.minikube\ca.pem
	I0817 00:57:45.240836   53120 exec_runner.go:152] cp: C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0817 00:57:45.242361   53120 exec_runner.go:145] found C:\Users\jenkins\minikube-integration\.minikube/cert.pem, removing ...
	I0817 00:57:45.242361   53120 exec_runner.go:190] rm: C:\Users\jenkins\minikube-integration\.minikube\cert.pem
	I0817 00:57:45.242921   53120 exec_runner.go:152] cp: C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0817 00:57:45.244356   53120 exec_runner.go:145] found C:\Users\jenkins\minikube-integration\.minikube/key.pem, removing ...
	I0817 00:57:45.244356   53120 exec_runner.go:190] rm: C:\Users\jenkins\minikube-integration\.minikube\key.pem
	I0817 00:57:45.244771   53120 exec_runner.go:152] cp: C:\Users\jenkins\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins\minikube-integration\.minikube/key.pem (1679 bytes)
	I0817 00:57:45.245969   53120 provision.go:112] generating server cert: C:\Users\jenkins\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.bridge-20210817002157-111344 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube bridge-20210817002157-111344]
	I0817 00:57:45.508549   53120 provision.go:172] copyRemoteCerts
	I0817 00:57:45.517278   53120 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 00:57:45.523275   53120 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210817002157-111344
	I0817 00:57:45.970696   53120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55258 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\bridge-20210817002157-111344\id_rsa Username:docker}
	I0817 00:57:46.122219   53120 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0817 00:57:46.183865   53120 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0817 00:57:46.241759   53120 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1257 bytes)
	I0817 00:57:46.301427   53120 provision.go:86] duration metric: configureAuth took 1.5152272s
	I0817 00:57:46.301427   53120 ubuntu.go:193] setting minikube options for container-runtime
	I0817 00:57:46.301710   53120 config.go:177] Loaded profile config "bridge-20210817002157-111344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.21.3
	I0817 00:57:46.308926   53120 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210817002157-111344
	I0817 00:57:46.794272   53120 main.go:130] libmachine: Using SSH client type: native
	I0817 00:57:46.795267   53120 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x4c95a0] 0x4c9560 <nil>  [] 0s} 127.0.0.1 55258 <nil> <nil>}
	I0817 00:57:46.795267   53120 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0817 00:57:47.038550   53120 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0817 00:57:47.038631   53120 ubuntu.go:71] root file system type: overlay
	I0817 00:57:47.039081   53120 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0817 00:57:47.044372   53120 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210817002157-111344
	I0817 00:57:47.495311   53120 main.go:130] libmachine: Using SSH client type: native
	I0817 00:57:47.496106   53120 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x4c95a0] 0x4c9560 <nil>  [] 0s} 127.0.0.1 55258 <nil> <nil>}
	I0817 00:57:47.496230   53120 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0817 00:57:47.755513   53120 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0817 00:57:47.765852   53120 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210817002157-111344
	I0817 00:57:48.249028   53120 main.go:130] libmachine: Using SSH client type: native
	I0817 00:57:48.249401   53120 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x4c95a0] 0x4c9560 <nil>  [] 0s} 127.0.0.1 55258 <nil> <nil>}
	I0817 00:57:48.249521   53120 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0817 00:57:48.488213   53120 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0817 00:57:48.488213   53120 machine.go:91] provisioned docker machine in 5.0764644s
	I0817 00:57:48.488543   53120 start.go:267] post-start starting for "bridge-20210817002157-111344" (driver="docker")
	I0817 00:57:48.488543   53120 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 00:57:48.496132   53120 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 00:57:48.504319   53120 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210817002157-111344
	I0817 00:57:48.954360   53120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55258 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\bridge-20210817002157-111344\id_rsa Username:docker}
	I0817 00:57:49.128516   53120 ssh_runner.go:149] Run: cat /etc/os-release
	I0817 00:57:49.149795   53120 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0817 00:57:49.149795   53120 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0817 00:57:49.149795   53120 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0817 00:57:49.149795   53120 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0817 00:57:49.149795   53120 filesync.go:126] Scanning C:\Users\jenkins\minikube-integration\.minikube\addons for local assets ...
	I0817 00:57:49.150305   53120 filesync.go:126] Scanning C:\Users\jenkins\minikube-integration\.minikube\files for local assets ...
	I0817 00:57:49.150783   53120 filesync.go:149] local asset: C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\1113442.pem -> 1113442.pem in /etc/ssl/certs
	I0817 00:57:49.160298   53120 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0817 00:57:49.192855   53120 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\1113442.pem --> /etc/ssl/certs/1113442.pem (1708 bytes)
	I0817 00:57:49.257951   53120 start.go:270] post-start completed in 769.3783ms
	I0817 00:57:49.265929   53120 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0817 00:57:49.272419   53120 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210817002157-111344
	I0817 00:57:49.738003   53120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55258 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\bridge-20210817002157-111344\id_rsa Username:docker}
	I0817 00:57:49.894756   53120 fix.go:57] fixHost completed within 4m8.6802692s
	I0817 00:57:49.894756   53120 start.go:80] releasing machines lock for "bridge-20210817002157-111344", held for 4m8.6805483s
	I0817 00:57:49.901953   53120 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-20210817002157-111344
	I0817 00:57:50.353729   53120 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0817 00:57:50.362132   53120 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210817002157-111344
	I0817 00:57:50.362661   53120 ssh_runner.go:149] Run: sudo service containerd status
	I0817 00:57:50.369632   53120 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210817002157-111344
	I0817 00:57:50.824482   53120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55258 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\bridge-20210817002157-111344\id_rsa Username:docker}
	I0817 00:57:50.838132   53120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55258 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\bridge-20210817002157-111344\id_rsa Username:docker}
	I0817 00:57:51.113335   53120 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0817 00:57:51.149020   53120 cruntime.go:249] skipping containerd shutdown because we are bound to it
	I0817 00:57:51.161760   53120 ssh_runner.go:149] Run: sudo service crio status
	I0817 00:57:51.226201   53120 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 00:57:51.282443   53120 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0817 00:57:51.330187   53120 ssh_runner.go:149] Run: sudo service docker status
	I0817 00:57:51.393714   53120 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0817 00:57:51.595322   53120 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0817 00:57:51.752816   53120 out.go:204] * Preparing Kubernetes v1.21.3 on Docker 20.10.8 ...
	I0817 00:57:51.768054   53120 cli_runner.go:115] Run: docker exec -t bridge-20210817002157-111344 dig +short host.docker.internal
	I0817 00:57:52.490808   53120 network.go:69] got host ip for mount in container by digging dns: 192.168.65.2
	I0817 00:57:52.499591   53120 ssh_runner.go:149] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0817 00:57:52.519741   53120 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 00:57:52.566409   53120 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" bridge-20210817002157-111344
	I0817 00:57:53.032465   53120 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
	I0817 00:57:53.040464   53120 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0817 00:57:53.144907   53120 docker.go:535] Got preloaded images: 
	I0817 00:57:53.144907   53120 docker.go:541] k8s.gcr.io/kube-apiserver:v1.21.3 wasn't preloaded
	I0817 00:57:53.152987   53120 ssh_runner.go:149] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0817 00:57:53.188505   53120 ssh_runner.go:149] Run: which lz4
	I0817 00:57:53.227555   53120 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0817 00:57:53.252046   53120 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0817 00:57:53.252868   53120 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (504826016 bytes)
	I0817 00:58:30.673327   53120 docker.go:500] Took 37.460538 seconds to copy over tarball
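(For scale: 504,826,016 bytes in 37.46 s works out to 504826016/37.46 ≈ 13.5 MB/s, roughly 12.9 MiB/s, through the SSH connection forwarded on 127.0.0.1:55258.)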
	I0817 00:58:30.680889   53120 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0817 00:58:41.962506   53120 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (11.2801012s)
	I0817 00:58:41.962621   53120 ssh_runner.go:100] rm: /preloaded.tar.lz4
	I0817 00:58:42.290512   53120 ssh_runner.go:149] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0817 00:58:42.317281   53120 ssh_runner.go:316] scp memory --> /var/lib/docker/image/overlay2/repositories.json (3152 bytes)
	I0817 00:58:42.364299   53120 ssh_runner.go:149] Run: sudo service docker restart
	I0817 00:58:44.493822   53120 ssh_runner.go:189] Completed: sudo service docker restart: (2.1294417s)
	I0817 00:58:44.493822   53120 openrc.go:152] restart output: 
	I0817 00:58:44.501319   53120 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0817 00:58:44.609862   53120 docker.go:535] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.21.3
	k8s.gcr.io/kube-scheduler:v1.21.3
	k8s.gcr.io/kube-proxy:v1.21.3
	k8s.gcr.io/kube-controller-manager:v1.21.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.4.1
	kubernetesui/dashboard:v2.1.0
	k8s.gcr.io/coredns/coredns:v1.8.0
	k8s.gcr.io/etcd:3.4.13-0
	kubernetesui/metrics-scraper:v1.0.4
	
	-- /stdout --
	I0817 00:58:44.610117   53120 cache_images.go:74] Images are preloaded, skipping loading
	I0817 00:58:44.616793   53120 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
	I0817 00:58:44.887260   53120 cni.go:93] Creating CNI manager for "bridge"
	I0817 00:58:44.887579   53120 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0817 00:58:44.887579   53120 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-20210817002157-111344 NodeName:bridge-20210817002157-111344 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0817 00:58:44.888408   53120 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "bridge-20210817002157-111344"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0817 00:58:44.888918   53120 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=bridge-20210817002157-111344 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:bridge-20210817002157-111344 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:}
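The [Unit]/[Service]/[Install] fragment above lands as the systemd drop-in 10-kubeadm.conf (scp'd a few lines below). A minimal sketch of inspecting and reloading it, assuming systemd is PID 1 in the node, as it is in the kicbase image:

	# Show the kubelet unit together with its drop-ins, then pick up any edits.
	sudo systemctl cat kubelet
	sudo systemctl daemon-reload && sudo systemctl restart kubelet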
	I0817 00:58:44.900050   53120 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0817 00:58:44.924958   53120 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 00:58:44.933231   53120 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /var/lib/minikube /etc/init.d
	I0817 00:58:44.954928   53120 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0817 00:58:44.993140   53120 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0817 00:58:45.026509   53120 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I0817 00:58:45.061164   53120 ssh_runner.go:316] scp memory --> /var/lib/minikube/openrc-restart-wrapper.sh (233 bytes)
	I0817 00:58:45.099911   53120 ssh_runner.go:316] scp memory --> /etc/init.d/kubelet (839 bytes)
	I0817 00:58:45.143009   53120 ssh_runner.go:149] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0817 00:58:45.174731   53120 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
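The /etc/hosts rewrite above is an idempotent append: drop any stale control-plane entry, add the current one, then swap the file into place. The same idiom, annotated:

	# Keep every line except a stale "control-plane.minikube.internal" record,
	# then append the fresh one and replace the file in a single copy.
	{ grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts; \
	  echo -e "192.168.58.2\tcontrol-plane.minikube.internal"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts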
	I0817 00:58:45.204886   53120 certs.go:52] Setting up C:\Users\jenkins\minikube-integration\.minikube\profiles\bridge-20210817002157-111344 for IP: 192.168.58.2
	I0817 00:58:45.204886   53120 certs.go:179] skipping minikubeCA CA generation: C:\Users\jenkins\minikube-integration\.minikube\ca.key
	I0817 00:58:45.204886   53120 certs.go:179] skipping proxyClientCA CA generation: C:\Users\jenkins\minikube-integration\.minikube\proxy-client-ca.key
	I0817 00:58:45.204886   53120 certs.go:297] generating minikube-user signed cert: C:\Users\jenkins\minikube-integration\.minikube\profiles\bridge-20210817002157-111344\client.key
	I0817 00:58:45.204886   53120 crypto.go:69] Generating cert C:\Users\jenkins\minikube-integration\.minikube\profiles\bridge-20210817002157-111344\client.crt with IP's: []
	I0817 00:58:45.481973   53120 crypto.go:157] Writing cert to C:\Users\jenkins\minikube-integration\.minikube\profiles\bridge-20210817002157-111344\client.crt ...
	I0817 00:58:45.481973   53120 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\.minikube\profiles\bridge-20210817002157-111344\client.crt: {Name:mk986e2fefaeb4fed46ab2f3aabe0e09f523402a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 00:58:45.483572   53120 crypto.go:165] Writing key to C:\Users\jenkins\minikube-integration\.minikube\profiles\bridge-20210817002157-111344\client.key ...
	I0817 00:58:45.483572   53120 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\.minikube\profiles\bridge-20210817002157-111344\client.key: {Name:mk826b810cd1b26efd65c389298828b3709d80fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 00:58:45.485727   53120 certs.go:297] generating minikube signed cert: C:\Users\jenkins\minikube-integration\.minikube\profiles\bridge-20210817002157-111344\apiserver.key.cee25041
	I0817 00:58:45.485727   53120 crypto.go:69] Generating cert C:\Users\jenkins\minikube-integration\.minikube\profiles\bridge-20210817002157-111344\apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0817 00:58:45.711017   53120 crypto.go:157] Writing cert to C:\Users\jenkins\minikube-integration\.minikube\profiles\bridge-20210817002157-111344\apiserver.crt.cee25041 ...
	I0817 00:58:45.711017   53120 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\.minikube\profiles\bridge-20210817002157-111344\apiserver.crt.cee25041: {Name:mka2e1601999ed4ce4f794689f877dcf6260dce7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 00:58:45.712480   53120 crypto.go:165] Writing key to C:\Users\jenkins\minikube-integration\.minikube\profiles\bridge-20210817002157-111344\apiserver.key.cee25041 ...
	I0817 00:58:45.712480   53120 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\.minikube\profiles\bridge-20210817002157-111344\apiserver.key.cee25041: {Name:mk0ab7bb1b411c00054f943f339a602a88b3547f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 00:58:45.714471   53120 certs.go:308] copying C:\Users\jenkins\minikube-integration\.minikube\profiles\bridge-20210817002157-111344\apiserver.crt.cee25041 -> C:\Users\jenkins\minikube-integration\.minikube\profiles\bridge-20210817002157-111344\apiserver.crt
	I0817 00:58:45.722405   53120 certs.go:312] copying C:\Users\jenkins\minikube-integration\.minikube\profiles\bridge-20210817002157-111344\apiserver.key.cee25041 -> C:\Users\jenkins\minikube-integration\.minikube\profiles\bridge-20210817002157-111344\apiserver.key
	I0817 00:58:45.723030   53120 certs.go:297] generating aggregator signed cert: C:\Users\jenkins\minikube-integration\.minikube\profiles\bridge-20210817002157-111344\proxy-client.key
	I0817 00:58:45.723030   53120 crypto.go:69] Generating cert C:\Users\jenkins\minikube-integration\.minikube\profiles\bridge-20210817002157-111344\proxy-client.crt with IP's: []
	I0817 00:58:45.865779   53120 crypto.go:157] Writing cert to C:\Users\jenkins\minikube-integration\.minikube\profiles\bridge-20210817002157-111344\proxy-client.crt ...
	I0817 00:58:45.865779   53120 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\.minikube\profiles\bridge-20210817002157-111344\proxy-client.crt: {Name:mk68632ad3b7baec638cf390ce1713823e1f2850 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 00:58:45.867967   53120 crypto.go:165] Writing key to C:\Users\jenkins\minikube-integration\.minikube\profiles\bridge-20210817002157-111344\proxy-client.key ...
	I0817 00:58:45.867967   53120 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\.minikube\profiles\bridge-20210817002157-111344\proxy-client.key: {Name:mkcf720db4293ad458573ecd20e6ec2667858163 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
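minikube generates these key pairs in Go (crypto.go); the openssl commands below are only a rough sketch of the same flow, signing a client certificate against the profile's CA. The subject values are illustrative, not taken from this log:

	openssl genrsa -out client.key 2048
	# Subject is an assumption for illustration; adjust CN/O as needed.
	openssl req -new -key client.key -subj "/CN=minikube-user/O=system:masters" -out client.csr
	openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 -out client.crt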
	I0817 00:58:45.875932   53120 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\111344.pem (1338 bytes)
	W0817 00:58:45.876939   53120 certs.go:372] ignoring C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\111344_empty.pem, impossibly tiny 0 bytes
	I0817 00:58:45.876939   53120 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0817 00:58:45.876939   53120 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0817 00:58:45.876939   53120 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0817 00:58:45.877940   53120 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0817 00:58:45.877940   53120 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\1113442.pem (1708 bytes)
	I0817 00:58:45.880375   53120 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\bridge-20210817002157-111344\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 00:58:45.940553   53120 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\bridge-20210817002157-111344\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0817 00:58:46.006805   53120 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\bridge-20210817002157-111344\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 00:58:46.063273   53120 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\bridge-20210817002157-111344\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0817 00:58:46.120723   53120 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 00:58:46.193951   53120 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0817 00:58:46.253821   53120 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 00:58:46.315522   53120 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0817 00:58:46.376760   53120 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 00:58:46.428802   53120 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\certs\111344.pem --> /usr/share/ca-certificates/111344.pem (1338 bytes)
	I0817 00:58:46.484289   53120 ssh_runner.go:316] scp C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\1113442.pem --> /usr/share/ca-certificates/1113442.pem (1708 bytes)
	I0817 00:58:46.543465   53120 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0817 00:58:46.603172   53120 ssh_runner.go:149] Run: openssl version
	I0817 00:58:46.633703   53120 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1113442.pem && ln -fs /usr/share/ca-certificates/1113442.pem /etc/ssl/certs/1113442.pem"
	I0817 00:58:46.689399   53120 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1113442.pem
	I0817 00:58:46.719012   53120 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 16 23:23 /usr/share/ca-certificates/1113442.pem
	I0817 00:58:46.728183   53120 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1113442.pem
	I0817 00:58:46.757098   53120 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1113442.pem /etc/ssl/certs/3ec20f2e.0"
	I0817 00:58:46.798039   53120 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 00:58:46.839407   53120 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 00:58:46.855264   53120 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 16 23:12 /usr/share/ca-certificates/minikubeCA.pem
	I0817 00:58:46.867908   53120 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 00:58:46.899184   53120 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0817 00:58:46.949106   53120 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111344.pem && ln -fs /usr/share/ca-certificates/111344.pem /etc/ssl/certs/111344.pem"
	I0817 00:58:46.989064   53120 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/111344.pem
	I0817 00:58:47.013920   53120 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 16 23:23 /usr/share/ca-certificates/111344.pem
	I0817 00:58:47.023908   53120 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111344.pem
	I0817 00:58:47.059789   53120 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/111344.pem /etc/ssl/certs/51391683.0"
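The link names 3ec20f2e.0, b5213941.0 and 51391683.0 above follow the OpenSSL subject-hash convention: the trust store looks a CA up under the hash printed by openssl x509 -hash, with a .0 suffix for the first certificate with that hash. A sketch of recreating one of them:

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # b5213941 in this run
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"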
	I0817 00:58:47.089140   53120 kubeadm.go:390] StartCluster: {Name:bridge-20210817002157-111344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:bridge-20210817002157-111344 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0817 00:58:47.101655   53120 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0817 00:58:47.224501   53120 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 00:58:47.260675   53120 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 00:58:47.290762   53120 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0817 00:58:47.298728   53120 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 00:58:47.323601   53120 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 00:58:47.323862   53120 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0817 00:59:15.328619   53120 out.go:204]   - Generating certificates and keys ...
	I0817 00:59:15.333453   53120 out.go:204]   - Booting up control plane ...
	I0817 00:59:15.337168   53120 out.go:204]   - Configuring RBAC rules ...
	I0817 00:59:15.342307   53120 cni.go:93] Creating CNI manager for "bridge"
	I0817 00:59:15.344013   53120 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0817 00:59:15.354553   53120 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0817 00:59:15.382786   53120 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0817 00:59:15.460570   53120 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0817 00:59:15.472990   53120 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 00:59:15.472990   53120 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48 minikube.k8s.io/name=bridge-20210817002157-111344 minikube.k8s.io/updated_at=2021_08_17T00_59_15_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 00:59:15.578269   53120 ops.go:34] apiserver oom_adj: -16
	I0817 00:59:16.974549   53120 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig: (1.5015026s)
	I0817 00:59:16.974549   53120 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48 minikube.k8s.io/name=bridge-20210817002157-111344 minikube.k8s.io/updated_at=2021_08_17T00_59_15_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig: (1.5015026s)
	I0817 00:59:16.983837   53120 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 00:59:18.352152   53120 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 00:59:18.854904   53120 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 00:59:19.359971   53120 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 00:59:19.855875   53120 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 00:59:20.358108   53120 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 00:59:20.859723   53120 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 00:59:21.351969   53120 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 00:59:21.860312   53120 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 00:59:22.358257   53120 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 00:59:22.856214   53120 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 00:59:23.361531   53120 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 00:59:23.864584   53120 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 00:59:24.378281   53120 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 00:59:24.856836   53120 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 00:59:25.356745   53120 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 00:59:25.854621   53120 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 00:59:26.360113   53120 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 00:59:26.862424   53120 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 00:59:27.361564   53120 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 00:59:28.359368   53120 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 00:59:29.068161   53120 kubeadm.go:985] duration metric: took 13.6070488s to wait for elevateKubeSystemPrivileges.
	I0817 00:59:29.068310   53120 kubeadm.go:392] StartCluster complete in 41.9775493s
	I0817 00:59:29.068310   53120 settings.go:142] acquiring lock: {Name:mk81656fcf8bcddd49caaa1adb1c177165a02100 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 00:59:29.068894   53120 settings.go:150] Updating kubeconfig:  C:\Users\jenkins\minikube-integration\kubeconfig
	I0817 00:59:29.075011   53120 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\kubeconfig: {Name:mk312e0248780fd448f3a83862df8ee597f47373 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 00:59:29.697373   53120 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "bridge-20210817002157-111344" rescaled to 1
	I0817 00:59:29.697622   53120 start.go:226] Will wait 5m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0817 00:59:29.700195   53120 out.go:177] * Verifying Kubernetes components...
	I0817 00:59:29.697622   53120 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0817 00:59:29.697763   53120 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0817 00:59:29.698256   53120 config.go:177] Loaded profile config "bridge-20210817002157-111344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.21.3
	I0817 00:59:29.700493   53120 addons.go:59] Setting storage-provisioner=true in profile "bridge-20210817002157-111344"
	I0817 00:59:29.700493   53120 addons.go:59] Setting default-storageclass=true in profile "bridge-20210817002157-111344"
	I0817 00:59:29.700493   53120 addons.go:135] Setting addon storage-provisioner=true in "bridge-20210817002157-111344"
	W0817 00:59:29.700493   53120 addons.go:147] addon storage-provisioner should already be in state true
	I0817 00:59:29.700493   53120 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-20210817002157-111344"
	I0817 00:59:29.700890   53120 host.go:66] Checking if "bridge-20210817002157-111344" exists ...
	I0817 00:59:29.710106   53120 ssh_runner.go:149] Run: sudo service kubelet status
	I0817 00:59:29.716075   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:59:29.721257   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:59:29.989269   53120 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" bridge-20210817002157-111344
	I0817 00:59:30.225141   53120 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 00:59:30.225241   53120 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 00:59:30.225559   53120 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0817 00:59:30.234178   53120 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210817002157-111344
	I0817 00:59:30.335347   53120 addons.go:135] Setting addon default-storageclass=true in "bridge-20210817002157-111344"
	W0817 00:59:30.335347   53120 addons.go:147] addon default-storageclass should already be in state true
	I0817 00:59:30.335600   53120 host.go:66] Checking if "bridge-20210817002157-111344" exists ...
	I0817 00:59:30.348862   53120 cli_runner.go:115] Run: docker container inspect bridge-20210817002157-111344 --format={{.State.Status}}
	I0817 00:59:30.569682   53120 node_ready.go:35] waiting up to 5m0s for node "bridge-20210817002157-111344" to be "Ready" ...
	I0817 00:59:30.588544   53120 node_ready.go:49] node "bridge-20210817002157-111344" has status "Ready":"True"
	I0817 00:59:30.588687   53120 node_ready.go:38] duration metric: took 19.0043ms waiting for node "bridge-20210817002157-111344" to be "Ready" ...
	I0817 00:59:30.588687   53120 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 00:59:30.672146   53120 pod_ready.go:78] waiting up to 5m0s for pod "coredns-558bd4d5db-jtcm5" in "kube-system" namespace to be "Ready" ...
	I0817 00:59:30.770458   53120 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.0702226s)
	I0817 00:59:30.770824   53120 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0817 00:59:30.786889   53120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55258 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\bridge-20210817002157-111344\id_rsa Username:docker}
	I0817 00:59:30.863406   53120 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0817 00:59:30.863406   53120 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0817 00:59:30.870222   53120 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210817002157-111344
	I0817 00:59:31.329476   53120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55258 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\bridge-20210817002157-111344\id_rsa Username:docker}
	I0817 00:59:31.699001   53120 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 00:59:32.342405   53120 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0817 00:59:32.750317   53120 pod_ready.go:102] pod "coredns-558bd4d5db-jtcm5" in "kube-system" namespace has status "Ready":"False"
	I0817 00:59:33.478608   53120 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.7067847s)
	I0817 00:59:33.478608   53120 start.go:728] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
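The sed pipeline above splices a hosts block into the CoreDNS Corefile so pods can resolve host.minikube.internal. A quick check that the stanza landed, assuming the profile's kubectl context:

	kubectl --context bridge-20210817002157-111344 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	# Expect a block like:
	#   hosts {
	#      192.168.65.2 host.minikube.internal
	#      fallthrough
	#   }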
	I0817 00:59:34.271648   53120 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.5715009s)
	I0817 00:59:34.271767   53120 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.9291695s)
	I0817 00:59:34.278133   53120 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0817 00:59:34.278133   53120 addons.go:344] enableAddons completed in 4.5801966s
	I0817 00:59:35.232729   53120 pod_ready.go:102] pod "coredns-558bd4d5db-jtcm5" in "kube-system" namespace has status "Ready":"False"
	I0817 00:59:37.735088   53120 pod_ready.go:102] pod "coredns-558bd4d5db-jtcm5" in "kube-system" namespace has status "Ready":"False"
	I0817 00:59:40.232505   53120 pod_ready.go:102] pod "coredns-558bd4d5db-jtcm5" in "kube-system" namespace has status "Ready":"False"
	I0817 00:59:42.732281   53120 pod_ready.go:102] pod "coredns-558bd4d5db-jtcm5" in "kube-system" namespace has status "Ready":"False"
	I0817 00:59:45.238745   53120 pod_ready.go:102] pod "coredns-558bd4d5db-jtcm5" in "kube-system" namespace has status "Ready":"False"
	I0817 00:59:47.727548   53120 pod_ready.go:102] pod "coredns-558bd4d5db-jtcm5" in "kube-system" namespace has status "Ready":"False"
	I0817 00:59:49.730512   53120 pod_ready.go:102] pod "coredns-558bd4d5db-jtcm5" in "kube-system" namespace has status "Ready":"False"
	I0817 00:59:51.731583   53120 pod_ready.go:102] pod "coredns-558bd4d5db-jtcm5" in "kube-system" namespace has status "Ready":"False"
	I0817 00:59:53.744324   53120 pod_ready.go:102] pod "coredns-558bd4d5db-jtcm5" in "kube-system" namespace has status "Ready":"False"
	I0817 00:59:56.233101   53120 pod_ready.go:102] pod "coredns-558bd4d5db-jtcm5" in "kube-system" namespace has status "Ready":"False"
	I0817 00:59:58.236392   53120 pod_ready.go:102] pod "coredns-558bd4d5db-jtcm5" in "kube-system" namespace has status "Ready":"False"
	I0817 01:00:00.256434   53120 pod_ready.go:102] pod "coredns-558bd4d5db-jtcm5" in "kube-system" namespace has status "Ready":"False"
	I0817 01:00:02.731915   53120 pod_ready.go:102] pod "coredns-558bd4d5db-jtcm5" in "kube-system" namespace has status "Ready":"False"
	I0817 01:00:04.733464   53120 pod_ready.go:102] pod "coredns-558bd4d5db-jtcm5" in "kube-system" namespace has status "Ready":"False"
	I0817 01:00:06.738166   53120 pod_ready.go:102] pod "coredns-558bd4d5db-jtcm5" in "kube-system" namespace has status "Ready":"False"
	I0817 01:00:09.230166   53120 pod_ready.go:102] pod "coredns-558bd4d5db-jtcm5" in "kube-system" namespace has status "Ready":"False"
	I0817 01:00:11.233007   53120 pod_ready.go:102] pod "coredns-558bd4d5db-jtcm5" in "kube-system" namespace has status "Ready":"False"
	I0817 01:00:13.732114   53120 pod_ready.go:102] pod "coredns-558bd4d5db-jtcm5" in "kube-system" namespace has status "Ready":"False"
	I0817 01:00:15.734132   53120 pod_ready.go:102] pod "coredns-558bd4d5db-jtcm5" in "kube-system" namespace has status "Ready":"False"
	I0817 01:00:18.240147   53120 pod_ready.go:102] pod "coredns-558bd4d5db-jtcm5" in "kube-system" namespace has status "Ready":"False"
	I0817 01:00:20.736554   53120 pod_ready.go:102] pod "coredns-558bd4d5db-jtcm5" in "kube-system" namespace has status "Ready":"False"
	I0817 01:00:22.747661   53120 pod_ready.go:102] pod "coredns-558bd4d5db-jtcm5" in "kube-system" namespace has status "Ready":"False"
	I0817 01:00:24.755431   53120 pod_ready.go:102] pod "coredns-558bd4d5db-jtcm5" in "kube-system" namespace has status "Ready":"False"
	I0817 01:00:27.234856   53120 pod_ready.go:102] pod "coredns-558bd4d5db-jtcm5" in "kube-system" namespace has status "Ready":"False"
	I0817 01:00:29.239284   53120 pod_ready.go:102] pod "coredns-558bd4d5db-jtcm5" in "kube-system" namespace has status "Ready":"False"
	I0817 01:00:31.735835   53120 pod_ready.go:102] pod "coredns-558bd4d5db-jtcm5" in "kube-system" namespace has status "Ready":"False"
	I0817 01:00:34.234707   53120 pod_ready.go:102] pod "coredns-558bd4d5db-jtcm5" in "kube-system" namespace has status "Ready":"False"
	I0817 01:00:36.733481   53120 pod_ready.go:102] pod "coredns-558bd4d5db-jtcm5" in "kube-system" namespace has status "Ready":"False"
	I0817 01:00:39.232136   53120 pod_ready.go:102] pod "coredns-558bd4d5db-jtcm5" in "kube-system" namespace has status "Ready":"False"
	I0817 01:00:41.731039   53120 pod_ready.go:102] pod "coredns-558bd4d5db-jtcm5" in "kube-system" namespace has status "Ready":"False"
	I0817 01:00:43.732475   53120 pod_ready.go:102] pod "coredns-558bd4d5db-jtcm5" in "kube-system" namespace has status "Ready":"False"
	I0817 01:00:45.744608   53120 pod_ready.go:102] pod "coredns-558bd4d5db-jtcm5" in "kube-system" namespace has status "Ready":"False"
	I0817 01:00:48.232672   53120 pod_ready.go:102] pod "coredns-558bd4d5db-jtcm5" in "kube-system" namespace has status "Ready":"False"
	I0817 01:00:50.234502   53120 pod_ready.go:102] pod "coredns-558bd4d5db-jtcm5" in "kube-system" namespace has status "Ready":"False"
	I0817 01:00:52.235838   53120 pod_ready.go:102] pod "coredns-558bd4d5db-jtcm5" in "kube-system" namespace has status "Ready":"False"
	I0817 01:00:54.237445   53120 pod_ready.go:102] pod "coredns-558bd4d5db-jtcm5" in "kube-system" namespace has status "Ready":"False"
	I0817 01:00:56.738596   53120 pod_ready.go:102] pod "coredns-558bd4d5db-jtcm5" in "kube-system" namespace has status "Ready":"False"
	I0817 01:00:59.232628   53120 pod_ready.go:102] pod "coredns-558bd4d5db-jtcm5" in "kube-system" namespace has status "Ready":"False"
	I0817 01:01:01.732509   53120 pod_ready.go:102] pod "coredns-558bd4d5db-jtcm5" in "kube-system" namespace has status "Ready":"False"
	I0817 01:01:03.733734   53120 pod_ready.go:102] pod "coredns-558bd4d5db-jtcm5" in "kube-system" namespace has status "Ready":"False"
	I0817 01:01:05.734301   53120 pod_ready.go:102] pod "coredns-558bd4d5db-jtcm5" in "kube-system" namespace has status "Ready":"False"
	I0817 01:01:08.233415   53120 pod_ready.go:102] pod "coredns-558bd4d5db-jtcm5" in "kube-system" namespace has status "Ready":"False"
	I0817 01:01:10.732530   53120 pod_ready.go:102] pod "coredns-558bd4d5db-jtcm5" in "kube-system" namespace has status "Ready":"False"
	I0817 01:01:13.253195   53120 pod_ready.go:102] pod "coredns-558bd4d5db-jtcm5" in "kube-system" namespace has status "Ready":"False"
	I0817 01:01:15.733858   53120 pod_ready.go:102] pod "coredns-558bd4d5db-jtcm5" in "kube-system" namespace has status "Ready":"False"
	I0817 01:01:17.734658   53120 pod_ready.go:102] pod "coredns-558bd4d5db-jtcm5" in "kube-system" namespace has status "Ready":"False"
	I0817 01:01:20.233386   53120 pod_ready.go:102] pod "coredns-558bd4d5db-jtcm5" in "kube-system" namespace has status "Ready":"False"
	I0817 01:01:22.241689   53120 pod_ready.go:102] pod "coredns-558bd4d5db-jtcm5" in "kube-system" namespace has status "Ready":"False"
	I0817 01:01:24.737168   53120 pod_ready.go:102] pod "coredns-558bd4d5db-jtcm5" in "kube-system" namespace has status "Ready":"False"
	I0817 01:01:26.737860   53120 pod_ready.go:102] pod "coredns-558bd4d5db-jtcm5" in "kube-system" namespace has status "Ready":"False"
	I0817 01:01:29.244175   53120 pod_ready.go:102] pod "coredns-558bd4d5db-jtcm5" in "kube-system" namespace has status "Ready":"False"
	I0817 01:01:31.733751   53120 pod_ready.go:102] pod "coredns-558bd4d5db-jtcm5" in "kube-system" namespace has status "Ready":"False"
	I0817 01:01:34.236817   53120 pod_ready.go:102] pod "coredns-558bd4d5db-jtcm5" in "kube-system" namespace has status "Ready":"False"
	I0817 01:01:36.737493   53120 pod_ready.go:102] pod "coredns-558bd4d5db-jtcm5" in "kube-system" namespace has status "Ready":"False"
	I0817 01:01:39.233818   53120 pod_ready.go:102] pod "coredns-558bd4d5db-jtcm5" in "kube-system" namespace has status "Ready":"False"
	I0817 01:01:41.736139   53120 pod_ready.go:102] pod "coredns-558bd4d5db-jtcm5" in "kube-system" namespace has status "Ready":"False"
	I0817 01:01:44.234643   53120 pod_ready.go:102] pod "coredns-558bd4d5db-jtcm5" in "kube-system" namespace has status "Ready":"False"
	I0817 01:01:46.234984   53120 pod_ready.go:102] pod "coredns-558bd4d5db-jtcm5" in "kube-system" namespace has status "Ready":"False"
	I0817 01:01:48.236146   53120 pod_ready.go:102] pod "coredns-558bd4d5db-jtcm5" in "kube-system" namespace has status "Ready":"False"
	I0817 01:01:50.246416   53120 pod_ready.go:102] pod "coredns-558bd4d5db-jtcm5" in "kube-system" namespace has status "Ready":"False"
	I0817 01:01:52.840134   53120 pod_ready.go:102] pod "coredns-558bd4d5db-jtcm5" in "kube-system" namespace has status "Ready":"False"
	I0817 01:01:55.237374   53120 pod_ready.go:102] pod "coredns-558bd4d5db-jtcm5" in "kube-system" namespace has status "Ready":"False"

** /stderr **
net_test.go:100: failed start: exit status 1
--- FAIL: TestNetworkPlugins/group/bridge/Start (521.51s)
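The failure mode here is that coredns-558bd4d5db-jtcm5 never reported Ready before the wait gave up, which is consistent with broken pod networking under the bridge CNI. A minimal sketch of reproducing the same readiness check by hand, assuming the profile's kubectl context still exists:

	kubectl --context bridge-20210817002157-111344 -n kube-system get pods -l k8s-app=kube-dns
	kubectl --context bridge-20210817002157-111344 -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=5m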

TestNetworkPlugins/group/kubenet/DNS (319.21s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:162: (dbg) Run:  kubectl --context kubenet-20210817002157-111344 exec deployment/netcat -- nslookup kubernetes.default
E0817 01:00:18.651642  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\false-20210817002204-111344\client.crt: The system cannot find the path specified.
E0817 01:00:20.410823  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210817000749-111344\client.crt: The system cannot find the path specified.
net_test.go:162: (dbg) Non-zero exit: kubectl --context kubenet-20210817002157-111344 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5488982s)

-- stdout --
	;; connection timed out; no servers could be reached
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:162: (dbg) Run:  kubectl --context kubenet-20210817002157-111344 exec deployment/netcat -- nslookup kubernetes.default
E0817 01:00:28.775278  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\default-k8s-different-port-20210817002733-111344\client.crt: The system cannot find the path specified.
net_test.go:162: (dbg) Non-zero exit: kubectl --context kubenet-20210817002157-111344 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.549985s)

-- stdout --
	;; connection timed out; no servers could be reached
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:162: (dbg) Run:  kubectl --context kubenet-20210817002157-111344 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context kubenet-20210817002157-111344 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.4769558s)

-- stdout --
	;; connection timed out; no servers could be reached
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:162: (dbg) Run:  kubectl --context kubenet-20210817002157-111344 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context kubenet-20210817002157-111344 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5399257s)

-- stdout --
	;; connection timed out; no servers could be reached
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0817 01:01:19.860998  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\auto-20210817002157-111344\client.crt: The system cannot find the path specified.
net_test.go:162: (dbg) Run:  kubectl --context kubenet-20210817002157-111344 exec deployment/netcat -- nslookup kubernetes.default
E0817 01:01:23.546106  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\old-k8s-version-20210817002204-111344\client.crt: The system cannot find the path specified.
net_test.go:162: (dbg) Non-zero exit: kubectl --context kubenet-20210817002157-111344 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.4364939s)

-- stdout --
	;; connection timed out; no servers could be reached
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0817 01:01:39.371532  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\enable-default-cni-20210817002157-111344\client.crt: The system cannot find the path specified.
E0817 01:01:39.377721  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\enable-default-cni-20210817002157-111344\client.crt: The system cannot find the path specified.
E0817 01:01:39.389779  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\enable-default-cni-20210817002157-111344\client.crt: The system cannot find the path specified.
E0817 01:01:39.410834  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\enable-default-cni-20210817002157-111344\client.crt: The system cannot find the path specified.
E0817 01:01:39.452955  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\enable-default-cni-20210817002157-111344\client.crt: The system cannot find the path specified.
E0817 01:01:39.535099  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\enable-default-cni-20210817002157-111344\client.crt: The system cannot find the path specified.
E0817 01:01:39.695743  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\enable-default-cni-20210817002157-111344\client.crt: The system cannot find the path specified.
E0817 01:01:40.016549  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\enable-default-cni-20210817002157-111344\client.crt: The system cannot find the path specified.
E0817 01:01:40.659244  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\enable-default-cni-20210817002157-111344\client.crt: The system cannot find the path specified.
E0817 01:01:41.743545  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\false-20210817002204-111344\client.crt: The system cannot find the path specified.
net_test.go:162: (dbg) Run:  kubectl --context kubenet-20210817002157-111344 exec deployment/netcat -- nslookup kubernetes.default
E0817 01:01:41.941490  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\enable-default-cni-20210817002157-111344\client.crt: The system cannot find the path specified.
E0817 01:01:44.239640  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344\client.crt: The system cannot find the path specified.
E0817 01:01:44.502557  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\enable-default-cni-20210817002157-111344\client.crt: The system cannot find the path specified.
E0817 01:01:49.623222  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\enable-default-cni-20210817002157-111344\client.crt: The system cannot find the path specified.
E0817 01:01:54.746400  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\cilium-20210817002204-111344\client.crt: The system cannot find the path specified.
net_test.go:162: (dbg) Non-zero exit: kubectl --context kubenet-20210817002157-111344 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.4593552s)

-- stdout --
	;; connection timed out; no servers could be reached
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:162: (dbg) Run:  kubectl --context kubenet-20210817002157-111344 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context kubenet-20210817002157-111344 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (0s)

=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:162: (dbg) Run:  kubectl --context kubenet-20210817002157-111344 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context kubenet-20210817002157-111344 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (0s)
E0817 01:02:20.347291  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\enable-default-cni-20210817002157-111344\client.crt: The system cannot find the path specified.
E0817 01:02:22.833474  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\no-preload-20210817002237-111344\client.crt: The system cannot find the path specified.

=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:162: (dbg) Run:  kubectl --context kubenet-20210817002157-111344 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context kubenet-20210817002157-111344 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (0s)
net_test.go:162: (dbg) Run:  kubectl --context kubenet-20210817002157-111344 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context kubenet-20210817002157-111344 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (0s)
E0817 01:03:01.310738  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\enable-default-cni-20210817002157-111344\client.crt: The system cannot find the path specified.
net_test.go:162: (dbg) Run:  kubectl --context kubenet-20210817002157-111344 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context kubenet-20210817002157-111344 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (0s)
E0817 01:04:09.230071  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210816232348-111344\client.crt: The system cannot find the path specified.
E0817 01:04:23.236965  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\enable-default-cni-20210817002157-111344\client.crt: The system cannot find the path specified.
net_test.go:162: (dbg) Run:  kubectl --context kubenet-20210817002157-111344 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context kubenet-20210817002157-111344 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (0s)
E0817 01:04:56.805995  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\auto-20210817002157-111344\client.crt: The system cannot find the path specified.
E0817 01:05:03.565278  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210817000749-111344\client.crt: The system cannot find the path specified.
E0817 01:05:18.663133  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\false-20210817002204-111344\client.crt: The system cannot find the path specified.
E0817 01:05:20.422074  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210817000749-111344\client.crt: The system cannot find the path specified.
E0817 01:05:28.787311  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\default-k8s-different-port-20210817002733-111344\client.crt: The system cannot find the path specified.
net_test.go:162: (dbg) Run:  kubectl --context kubenet-20210817002157-111344 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context kubenet-20210817002157-111344 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (0s)
net_test.go:168: failed to do nslookup on kubernetes.default: context deadline exceeded
net_test.go:173: failed nslookup: got="", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/kubenet/DNS (319.21s)
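Every nslookup above timed out instead of answering with the expected 10.96.0.1, i.e. the netcat pod could not reach any DNS server at all. A short triage sketch for that state, assuming the same context:

	# Does the cluster DNS Service have endpoints, and does a lookup work now?
	kubectl --context kubenet-20210817002157-111344 -n kube-system get endpoints kube-dns
	kubectl --context kubenet-20210817002157-111344 exec deployment/netcat -- nslookup kubernetes.default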


Test pass (215/249)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.14.0/json-events 21.03
4 TestDownloadOnly/v1.14.0/preload-exists 0
7 TestDownloadOnly/v1.14.0/kubectl 0
8 TestDownloadOnly/v1.14.0/LogsDuration 0.56
10 TestDownloadOnly/v1.21.3/json-events 13.64
11 TestDownloadOnly/v1.21.3/preload-exists 0.13
14 TestDownloadOnly/v1.21.3/kubectl 0
15 TestDownloadOnly/v1.21.3/LogsDuration 0.42
17 TestDownloadOnly/v1.22.0-rc.0/json-events 16.59
18 TestDownloadOnly/v1.22.0-rc.0/preload-exists 0.03
21 TestDownloadOnly/v1.22.0-rc.0/kubectl 0
22 TestDownloadOnly/v1.22.0-rc.0/LogsDuration 0.43
23 TestDownloadOnly/DeleteAll 5.26
24 TestDownloadOnly/DeleteAlwaysSucceeds 3.62
25 TestDownloadOnlyKic 42.36
26 TestOffline 298.79
30 TestAddons/parallel/Ingress 88.75
31 TestAddons/parallel/MetricsServer 9.54
32 TestAddons/parallel/HelmTiller 80.61
33 TestAddons/parallel/Olm 197.9
34 TestAddons/parallel/CSI 136.49
37 TestDockerFlags 210.21
38 TestForceSystemdFlag 205.62
39 TestForceSystemdEnv 236.3
44 TestErrorSpam/setup 98.81
45 TestErrorSpam/start 10.84
46 TestErrorSpam/status 11.21
47 TestErrorSpam/pause 10.67
48 TestErrorSpam/unpause 10.81
49 TestErrorSpam/stop 21.37
52 TestFunctional/serial/CopySyncFile 0.04
53 TestFunctional/serial/StartWithProxy 140.13
54 TestFunctional/serial/AuditLog 0
55 TestFunctional/serial/SoftStart 19.53
56 TestFunctional/serial/KubeContext 0.15
57 TestFunctional/serial/KubectlGetPods 0.42
60 TestFunctional/serial/CacheCmd/cache/add_remote 15.94
61 TestFunctional/serial/CacheCmd/cache/add_local 6.17
62 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.32
63 TestFunctional/serial/CacheCmd/cache/list 0.33
64 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 3.56
65 TestFunctional/serial/CacheCmd/cache/cache_reload 15.19
66 TestFunctional/serial/CacheCmd/cache/delete 0.62
67 TestFunctional/serial/MinikubeKubectlCmd 2.25
68 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.84
69 TestFunctional/serial/ExtraConfig 102.17
70 TestFunctional/serial/ComponentHealth 0.22
71 TestFunctional/serial/LogsCmd 5.88
72 TestFunctional/serial/LogsFileCmd 5.6
74 TestFunctional/parallel/ConfigCmd 2.09
76 TestFunctional/parallel/DryRun 6.87
77 TestFunctional/parallel/InternationalLanguage 2.91
78 TestFunctional/parallel/StatusCmd 12.6
82 TestFunctional/parallel/AddonsCmd 2.5
83 TestFunctional/parallel/PersistentVolumeClaim 69.22
85 TestFunctional/parallel/SSHCmd 7.9
86 TestFunctional/parallel/CpCmd 7.89
87 TestFunctional/parallel/MySQL 60.26
88 TestFunctional/parallel/FileSync 4.08
89 TestFunctional/parallel/CertSync 23.48
91 TestFunctional/parallel/DockerEnv 15.29
93 TestFunctional/parallel/NodeLabels 0.2
94 TestFunctional/parallel/LoadImage 11.78
95 TestFunctional/parallel/RemoveImage 16.11
96 TestFunctional/parallel/LoadImageFromFile 13.75
97 TestFunctional/parallel/BuildImage 14.76
98 TestFunctional/parallel/ListImages 3.34
99 TestFunctional/parallel/NonActiveRuntimeDisabled 4.23
101 TestFunctional/parallel/Version/short 0.35
102 TestFunctional/parallel/Version/components 4.59
103 TestFunctional/parallel/UpdateContextCmd/no_changes 2.58
104 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 2.54
105 TestFunctional/parallel/UpdateContextCmd/no_clusters 2.53
106 TestFunctional/parallel/ProfileCmd/profile_not_create 6.44
107 TestFunctional/parallel/ProfileCmd/profile_list 4.43
108 TestFunctional/parallel/ProfileCmd/profile_json_output 4.33
110 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
112 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.22
117 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
118 TestFunctional/delete_busybox_image 1.04
119 TestFunctional/delete_my-image_image 0.47
120 TestFunctional/delete_minikube_cached_images 0.44
124 TestJSONOutput/start/Audit 0
126 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
127 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
129 TestJSONOutput/pause/Audit 0
131 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
132 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
134 TestJSONOutput/unpause/Audit 0
136 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
137 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
139 TestJSONOutput/stop/Audit 0
141 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
142 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
143 TestErrorJSONOutput 3.96
145 TestKicCustomNetwork/create_custom_network 110.32
146 TestKicCustomNetwork/use_default_bridge_network 108.28
147 TestKicExistingNetwork 107.59
148 TestMainNoArgs 0.31
151 TestMultiNode/serial/FreshStart2Nodes 224.51
152 TestMultiNode/serial/DeployApp2Nodes 25.42
153 TestMultiNode/serial/PingHostFrom2Pods 9.75
154 TestMultiNode/serial/AddNode 82.52
155 TestMultiNode/serial/ProfileList 3.65
156 TestMultiNode/serial/CopyFile 27.2
157 TestMultiNode/serial/StopNode 15.76
158 TestMultiNode/serial/StartAfterStop 39.79
159 TestMultiNode/serial/RestartKeepsNodes 283.69
160 TestMultiNode/serial/DeleteNode 23.5
161 TestMultiNode/serial/StopMultiNode 32.97
162 TestMultiNode/serial/RestartMultiNode 168.51
163 TestMultiNode/serial/ValidateNameConflict 125.38
168 TestDebPackageInstall/install_amd64_debian:sid/minikube 0
169 TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver 0
171 TestDebPackageInstall/install_amd64_debian:latest/minikube 0
172 TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver 0
174 TestDebPackageInstall/install_amd64_debian:10/minikube 0
175 TestDebPackageInstall/install_amd64_debian:10/kvm2-driver 0
177 TestDebPackageInstall/install_amd64_debian:9/minikube 0
178 TestDebPackageInstall/install_amd64_debian:9/kvm2-driver 0
180 TestDebPackageInstall/install_amd64_ubuntu:latest/minikube 0
181 TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver 0
183 TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube 0
184 TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver 0
186 TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube 0
187 TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver 0
189 TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube 0
190 TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver 0
191 TestPreload 240.84
192 TestScheduledStopWindows 140.6
194 TestSkaffold 174.95
197 TestRunningBinaryUpgrade 355.86
200 TestMissingContainerUpgrade 373.79
209 TestPause/serial/Start 248.22
211 TestPause/serial/SecondStartNoReconfiguration 49.21
212 TestPause/serial/Pause 5.71
213 TestPause/serial/VerifyStatus 4.32
214 TestPause/serial/Unpause 5.29
215 TestPause/serial/PauseAgain 5.46
216 TestPause/serial/DeletePaused 19.8
217 TestPause/serial/VerifyDeletedResources 16.72
230 TestStartStop/group/old-k8s-version/serial/FirstStart 258.39
232 TestStartStop/group/no-preload/serial/FirstStart 284.01
234 TestStartStop/group/embed-certs/serial/FirstStart 215.34
235 TestStartStop/group/old-k8s-version/serial/DeployApp 18.82
236 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 6.25
237 TestStartStop/group/old-k8s-version/serial/Stop 17.92
238 TestStartStop/group/embed-certs/serial/DeployApp 16.99
239 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 4.52
240 TestStartStop/group/old-k8s-version/serial/SecondStart 465.2
241 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 5.24
242 TestStartStop/group/no-preload/serial/DeployApp 14.44
243 TestStartStop/group/embed-certs/serial/Stop 17.77
245 TestStartStop/group/default-k8s-different-port/serial/FirstStart 174.91
246 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 4.88
247 TestStartStop/group/no-preload/serial/Stop 17.58
248 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 4.48
249 TestStartStop/group/embed-certs/serial/SecondStart 448.1
250 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 4.33
251 TestStartStop/group/no-preload/serial/SecondStart 475.76
252 TestStartStop/group/default-k8s-different-port/serial/DeployApp 13.38
253 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 5.26
254 TestStartStop/group/default-k8s-different-port/serial/Stop 16.06
255 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 4.17
256 TestStartStop/group/default-k8s-different-port/serial/SecondStart 441.19
257 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.14
258 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.62
259 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 4.17
260 TestStartStop/group/old-k8s-version/serial/Pause 29.55
261 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 8.09
262 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.47
263 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 4.31
264 TestStartStop/group/embed-certs/serial/Pause 32.52
265 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 9.06
266 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.61
268 TestStartStop/group/newest-cni/serial/FirstStart 520.5
269 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 4.33
270 TestStartStop/group/no-preload/serial/Pause 28.33
271 TestNetworkPlugins/group/auto/Start 198.13
272 TestNetworkPlugins/group/false/Start 183.04
273 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 8.12
274 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 8.02
275 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 4.57
276 TestStartStop/group/default-k8s-different-port/serial/Pause 27.65
277 TestNetworkPlugins/group/cilium/Start 433.7
278 TestNetworkPlugins/group/auto/KubeletFlags 3.83
279 TestNetworkPlugins/group/auto/NetCatPod 16.12
280 TestNetworkPlugins/group/auto/DNS 0.7
281 TestNetworkPlugins/group/auto/Localhost 0.7
282 TestNetworkPlugins/group/false/KubeletFlags 4.13
283 TestNetworkPlugins/group/auto/HairPin 5.75
284 TestNetworkPlugins/group/false/NetCatPod 24.75
286 TestNetworkPlugins/group/false/DNS 0.74
287 TestNetworkPlugins/group/false/Localhost 0.6
288 TestNetworkPlugins/group/false/HairPin 5.74
290 TestStartStop/group/newest-cni/serial/DeployApp 0
291 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 10.32
292 TestStartStop/group/newest-cni/serial/Stop 19.75
293 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 4.41
294 TestStartStop/group/newest-cni/serial/SecondStart 101.54
295 TestNetworkPlugins/group/cilium/ControllerPod 5.09
296 TestNetworkPlugins/group/cilium/KubeletFlags 4.52
297 TestNetworkPlugins/group/cilium/NetCatPod 37.88
298 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
299 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
300 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 4.5
302 TestNetworkPlugins/group/cilium/DNS 1.05
303 TestNetworkPlugins/group/cilium/Localhost 1.45
304 TestNetworkPlugins/group/cilium/HairPin 1.13
305 TestNetworkPlugins/group/enable-default-cni/Start 206.92
307 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 4.33
308 TestNetworkPlugins/group/enable-default-cni/NetCatPod 26.67
311 TestNetworkPlugins/group/kubenet/Start 380.53
312 TestNetworkPlugins/group/kubenet/KubeletFlags 3.59
313 TestNetworkPlugins/group/kubenet/NetCatPod 13.96
TestDownloadOnly/v1.14.0/json-events (21.03s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20210816230902-111344 --force --alsologtostderr --kubernetes-version=v1.14.0 --container-runtime=docker --driver=docker
aaa_download_only_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20210816230902-111344 --force --alsologtostderr --kubernetes-version=v1.14.0 --container-runtime=docker --driver=docker: (21.0297269s)
--- PASS: TestDownloadOnly/v1.14.0/json-events (21.03s)
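
The json-events sub-test drives "minikube start" with "-o=json --download-only", which commits minikube to emitting machine-readable progress events on stdout. The Go sketch below shows what a consumer of that contract looks like; it is illustrative only — the profile name is made up, and the assumption that each stdout line is a self-contained JSON object with a CloudEvents-style "type" field reflects how the flag is documented to behave, not something this report verifies.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Same invocation shape as the test; the profile name is hypothetical.
	cmd := exec.Command("minikube", "start", "-o=json", "--download-only",
		"-p", "download-only-demo",
		"--kubernetes-version=v1.14.0", "--driver=docker")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	sc := bufio.NewScanner(stdout)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // JSON event lines can be long
	for sc.Scan() {
		var ev map[string]interface{}
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			log.Printf("skipping non-JSON line: %s", sc.Text())
			continue
		}
		fmt.Println("event type:", ev["type"])
	}
	if err := cmd.Wait(); err != nil {
		log.Fatal(err)
	}
}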

                                                
                                    
TestDownloadOnly/v1.14.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/preload-exists
--- PASS: TestDownloadOnly/v1.14.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.14.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/kubectl
--- PASS: TestDownloadOnly/v1.14.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.14.0/LogsDuration (0.56s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-20210816230902-111344
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-20210816230902-111344: exit status 85 (555.7291ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/16 23:09:04
	Running on machine: windows-server-2
	Binary: Built with gc go1.16.7 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 23:09:04.556699  112164 out.go:298] Setting OutFile to fd 656 ...
	I0816 23:09:04.558455  112164 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 23:09:04.558455  112164 out.go:311] Setting ErrFile to fd 660...
	I0816 23:09:04.558455  112164 out.go:345] TERM=,COLORTERM=, which probably does not support color
	W0816 23:09:04.572186  112164 root.go:291] Error reading config file at C:\Users\jenkins\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0816 23:09:04.578067  112164 out.go:305] Setting JSON to true
	I0816 23:09:04.582051  112164 start.go:111] hostinfo: {"hostname":"windows-server-2","uptime":8363391,"bootTime":1620791953,"procs":141,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2f8328f4-5428-47c7-ab5a-b32e2504bd6f"}
	W0816 23:09:04.582051  112164 start.go:119] gopshost.Virtualization returned error: not implemented yet
	I0816 23:09:04.586793  112164 notify.go:169] Checking for updates...
	I0816 23:09:04.590063  112164 driver.go:335] Setting default libvirt URI to qemu:///system
	I0816 23:09:06.231924  112164 docker.go:132] docker version: linux-20.10.2
	I0816 23:09:06.238594  112164 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0816 23:09:06.912425  112164 info.go:263] docker info: {ID:4XCY:3GZD:KK67:IPM7:RRQF:WWZF:OGQ6:X6HQ:572M:7N57:P63G:EAE5 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:45 SystemTime:2021-08-16 23:09:06.6117324 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0816 23:09:06.915160  112164 start.go:278] selected driver: docker
	I0816 23:09:06.915385  112164 start.go:751] validating driver "docker" against <nil>
	I0816 23:09:06.933270  112164 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0816 23:09:07.586914  112164 info.go:263] docker info: {ID:4XCY:3GZD:KK67:IPM7:RRQF:WWZF:OGQ6:X6HQ:572M:7N57:P63G:EAE5 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:45 SystemTime:2021-08-16 23:09:07.2959747 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0816 23:09:07.587138  112164 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0816 23:09:07.748649  112164 start_flags.go:344] Using suggested 15300MB memory alloc based on sys=61438MB, container=20001MB
	I0816 23:09:07.749036  112164 start_flags.go:679] Wait components to verify : map[apiserver:true system_pods:true]
	I0816 23:09:07.749207  112164 cni.go:93] Creating CNI manager for ""
	I0816 23:09:07.749207  112164 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0816 23:09:07.749207  112164 start_flags.go:277] config:
	{Name:download-only-20210816230902-111344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:15300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:download-only-20210816230902-111344 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 23:09:07.752098  112164 cache.go:117] Beginning downloading kic base image for docker with docker
	I0816 23:09:07.754310  112164 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime docker
	I0816 23:09:07.754310  112164 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0816 23:09:07.820616  112164 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.14.0-docker-overlay2-amd64.tar.lz4
	I0816 23:09:07.820803  112164 cache.go:56] Caching tarball of preloaded images
	I0816 23:09:07.822311  112164 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime docker
	I0816 23:09:07.825908  112164 preload.go:237] getting checksum for preloaded-images-k8s-v11-v1.14.0-docker-overlay2-amd64.tar.lz4 ...
	I0816 23:09:07.921207  112164 download.go:92] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.14.0-docker-overlay2-amd64.tar.lz4?checksum=md5:cdcafd56ec108ba69c9fa94a2cd82e35 -> C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v11-v1.14.0-docker-overlay2-amd64.tar.lz4
	I0816 23:09:08.163940  112164 cache.go:145] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 to local cache
	I0816 23:09:08.164497  112164 localpath.go:146] windows sanitize: C:\Users\jenkins\minikube-integration\.minikube\cache\kic\kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6.tar -> C:\Users\jenkins\minikube-integration\.minikube\cache\kic\kicbase-builds_v0.0.25-1628619379-12032@sha256_937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6.tar
	I0816 23:09:08.164913  112164 localpath.go:146] windows sanitize: C:\Users\jenkins\minikube-integration\.minikube\cache\kic\kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6.tar -> C:\Users\jenkins\minikube-integration\.minikube\cache\kic\kicbase-builds_v0.0.25-1628619379-12032@sha256_937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6.tar
	I0816 23:09:08.164913  112164 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local cache directory
	I0816 23:09:08.165666  112164 image.go:119] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 to local cache
	I0816 23:09:12.580326  112164 preload.go:247] saving checksum for preloaded-images-k8s-v11-v1.14.0-docker-overlay2-amd64.tar.lz4 ...
	I0816 23:09:12.582365  112164 preload.go:254] verifying checksum of C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v11-v1.14.0-docker-overlay2-amd64.tar.lz4 ...
	I0816 23:09:14.910848  112164 cache.go:59] Finished verifying existence of preloaded tar for  v1.14.0 on docker
	I0816 23:09:14.911221  112164 profile.go:148] Saving config to C:\Users\jenkins\minikube-integration\.minikube\profiles\download-only-20210816230902-111344\config.json ...
	I0816 23:09:14.911791  112164 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\.minikube\profiles\download-only-20210816230902-111344\config.json: {Name:mk373336e0398c8b50cd6331e74fec7d6d71c347 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 23:09:14.912429  112164 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime docker
	I0816 23:09:14.914404  112164 download.go:92] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/windows/amd64/kubectl.exe?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/windows/amd64/kubectl.exe.sha1 -> C:\Users\jenkins\minikube-integration\.minikube\cache\windows\v1.14.0/kubectl.exe
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210816230902-111344"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.14.0/LogsDuration (0.56s)
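
The two localpath.go "windows sanitize" lines above show the cached image reference being rewritten before it is used as a file name: kicbase-builds:v0.0.25-...@sha256:...tar becomes kicbase-builds_v0.0.25-...@sha256_...tar. A colon is not legal in a Windows file name outside the drive prefix, while "@" is. The function below is a hypothetical re-creation of that rewrite, inferred solely from the before/after pair in the log; the name and exact rule are guesses, not minikube's code.

package main

import (
	"fmt"
	"strings"
)

// sanitize replaces every ':' after the drive letter with '_', matching the
// transformation the log's "windows sanitize" lines show; '@' is a legal
// Windows file-name character and is left alone.
func sanitize(p string) string {
	if len(p) >= 2 && p[1] == ':' { // preserve the drive prefix, e.g. "C:"
		return p[:2] + strings.ReplaceAll(p[2:], ":", "_")
	}
	return strings.ReplaceAll(p, ":", "_")
}

func main() {
	in := `C:\cache\kic\kicbase-builds:v0.0.25-1628619379-12032@sha256:937f.tar`
	fmt.Println(sanitize(in))
	// prints C:\cache\kic\kicbase-builds_v0.0.25-1628619379-12032@sha256_937f.tar
}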

                                                
                                    
TestDownloadOnly/v1.21.3/json-events (13.64s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.21.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20210816230902-111344 --force --alsologtostderr --kubernetes-version=v1.21.3 --container-runtime=docker --driver=docker
aaa_download_only_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20210816230902-111344 --force --alsologtostderr --kubernetes-version=v1.21.3 --container-runtime=docker --driver=docker: (13.6355226s)
--- PASS: TestDownloadOnly/v1.21.3/json-events (13.64s)

                                                
                                    
TestDownloadOnly/v1.21.3/preload-exists (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.21.3/preload-exists
--- PASS: TestDownloadOnly/v1.21.3/preload-exists (0.13s)

                                                
                                    
TestDownloadOnly/v1.21.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.21.3/kubectl
--- PASS: TestDownloadOnly/v1.21.3/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.21.3/LogsDuration (0.42s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.21.3/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-20210816230902-111344
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-20210816230902-111344: exit status 85 (421.5433ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/16 23:09:24
	Running on machine: windows-server-2
	Binary: Built with gc go1.16.7 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 23:09:24.824443   77228 out.go:298] Setting OutFile to fd 736 ...
	I0816 23:09:24.825994   77228 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 23:09:24.825994   77228 out.go:311] Setting ErrFile to fd 740...
	I0816 23:09:24.825994   77228 out.go:345] TERM=,COLORTERM=, which probably does not support color
	W0816 23:09:24.842652   77228 root.go:291] Error reading config file at C:\Users\jenkins\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins\minikube-integration\.minikube\config\config.json: The system cannot find the file specified.
	I0816 23:09:24.843569   77228 out.go:305] Setting JSON to true
	I0816 23:09:24.847533   77228 start.go:111] hostinfo: {"hostname":"windows-server-2","uptime":8363412,"bootTime":1620791952,"procs":142,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2f8328f4-5428-47c7-ab5a-b32e2504bd6f"}
	W0816 23:09:24.847533   77228 start.go:119] gopshost.Virtualization returned error: not implemented yet
	I0816 23:09:24.852089   77228 notify.go:169] Checking for updates...
	I0816 23:09:24.855819   77228 config.go:177] Loaded profile config "download-only-20210816230902-111344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.14.0
	W0816 23:09:24.856439   77228 start.go:659] api.Load failed for download-only-20210816230902-111344: filestore "download-only-20210816230902-111344": Docker machine "download-only-20210816230902-111344" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0816 23:09:24.856988   77228 driver.go:335] Setting default libvirt URI to qemu:///system
	W0816 23:09:24.857293   77228 start.go:659] api.Load failed for download-only-20210816230902-111344: filestore "download-only-20210816230902-111344": Docker machine "download-only-20210816230902-111344" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0816 23:09:26.513843   77228 docker.go:132] docker version: linux-20.10.2
	I0816 23:09:26.521012   77228 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0816 23:09:27.162983   77228 info.go:263] docker info: {ID:4XCY:3GZD:KK67:IPM7:RRQF:WWZF:OGQ6:X6HQ:572M:7N57:P63G:EAE5 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:45 SystemTime:2021-08-16 23:09:26.8777009 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0816 23:09:27.165314   77228 start.go:278] selected driver: docker
	I0816 23:09:27.165314   77228 start.go:751] validating driver "docker" against &{Name:download-only-20210816230902-111344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:15300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:download-only-20210816230902-111344 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 23:09:27.183480   77228 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0816 23:09:27.819890   77228 info.go:263] docker info: {ID:4XCY:3GZD:KK67:IPM7:RRQF:WWZF:OGQ6:X6HQ:572M:7N57:P63G:EAE5 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:45 SystemTime:2021-08-16 23:09:27.5325631 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0816 23:09:27.868314   77228 cni.go:93] Creating CNI manager for ""
	I0816 23:09:27.868314   77228 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0816 23:09:27.868314   77228 start_flags.go:277] config:
	{Name:download-only-20210816230902-111344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:15300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:download-only-20210816230902-111344 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 23:09:27.871006   77228 cache.go:117] Beginning downloading kic base image for docker with docker
	I0816 23:09:27.873134   77228 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
	I0816 23:09:27.873554   77228 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0816 23:09:27.929599   77228 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4
	I0816 23:09:27.930375   77228 cache.go:56] Caching tarball of preloaded images
	I0816 23:09:27.930919   77228 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
	I0816 23:09:27.933498   77228 preload.go:237] getting checksum for preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4 ...
	I0816 23:09:28.015290   77228 download.go:92] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4?checksum=md5:3231aae7a1f1d991e6e500ed4461f6b3 -> C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4
	I0816 23:09:28.281334   77228 cache.go:145] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 to local cache
	I0816 23:09:28.281334   77228 localpath.go:146] windows sanitize: C:\Users\jenkins\minikube-integration\.minikube\cache\kic\kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6.tar -> C:\Users\jenkins\minikube-integration\.minikube\cache\kic\kicbase-builds_v0.0.25-1628619379-12032@sha256_937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6.tar
	I0816 23:09:28.282186   77228 localpath.go:146] windows sanitize: C:\Users\jenkins\minikube-integration\.minikube\cache\kic\kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6.tar -> C:\Users\jenkins\minikube-integration\.minikube\cache\kic\kicbase-builds_v0.0.25-1628619379-12032@sha256_937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6.tar
	I0816 23:09:28.282186   77228 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local cache directory
	I0816 23:09:28.282318   77228 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local cache directory, skipping pull
	I0816 23:09:28.282318   77228 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in cache, skipping pull
	I0816 23:09:28.282525   77228 cache.go:148] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 as a tarball
	I0816 23:09:34.196503   77228 preload.go:247] saving checksum for preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4 ...
	I0816 23:09:34.197811   77228 preload.go:254] verifying checksum of C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210816230902-111344"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.21.3/LogsDuration (0.42s)
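
The download.go lines in this block fetch the preload tarball with a "?checksum=md5:..." query, i.e. the downloader is handed the digest it must verify after the fetch (the kubectl.exe downloads elsewhere in this report use the same mechanism with "checksum=file:...", pointing at a published .sha1/.sha256 file). Below is a minimal sketch of that post-download verification step, not minikube's actual code; the digest comes from this log, and the local path is shortened for illustration.

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"os"
)

// verifyMD5 hashes the downloaded file and compares it with the digest that
// was carried in the "?checksum=md5:..." query string.
func verifyMD5(path, want string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	err := verifyMD5(`preloaded-images-k8s-v11-v1.21.3-docker-overlay2-amd64.tar.lz4`,
		"3231aae7a1f1d991e6e500ed4461f6b3")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("preload tarball checksum OK")
}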

                                                
                                    
TestDownloadOnly/v1.22.0-rc.0/json-events (16.59s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.0-rc.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20210816230902-111344 --force --alsologtostderr --kubernetes-version=v1.22.0-rc.0 --container-runtime=docker --driver=docker
aaa_download_only_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20210816230902-111344 --force --alsologtostderr --kubernetes-version=v1.22.0-rc.0 --container-runtime=docker --driver=docker: (16.5859589s)
--- PASS: TestDownloadOnly/v1.22.0-rc.0/json-events (16.59s)

                                                
                                    
TestDownloadOnly/v1.22.0-rc.0/preload-exists (0.03s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.0-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.22.0-rc.0/preload-exists (0.03s)

                                                
                                    
TestDownloadOnly/v1.22.0-rc.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.0-rc.0/kubectl
--- PASS: TestDownloadOnly/v1.22.0-rc.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.22.0-rc.0/LogsDuration (0.43s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.0-rc.0/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-20210816230902-111344
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-20210816230902-111344: exit status 85 (433.1422ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/16 23:09:39
	Running on machine: windows-server-2
	Binary: Built with gc go1.16.7 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 23:09:39.036540    6756 out.go:298] Setting OutFile to fd 744 ...
	I0816 23:09:39.038548    6756 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 23:09:39.038548    6756 out.go:311] Setting ErrFile to fd 648...
	I0816 23:09:39.038548    6756 out.go:345] TERM=,COLORTERM=, which probably does not support color
	W0816 23:09:39.053937    6756 root.go:291] Error reading config file at C:\Users\jenkins\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins\minikube-integration\.minikube\config\config.json: The system cannot find the file specified.
	I0816 23:09:39.054934    6756 out.go:305] Setting JSON to true
	I0816 23:09:39.059115    6756 start.go:111] hostinfo: {"hostname":"windows-server-2","uptime":8363426,"bootTime":1620791953,"procs":142,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2f8328f4-5428-47c7-ab5a-b32e2504bd6f"}
	W0816 23:09:39.059334    6756 start.go:119] gopshost.Virtualization returned error: not implemented yet
	I0816 23:09:39.063943    6756 notify.go:169] Checking for updates...
	I0816 23:09:39.071301    6756 config.go:177] Loaded profile config "download-only-20210816230902-111344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.21.3
	W0816 23:09:39.071570    6756 start.go:659] api.Load failed for download-only-20210816230902-111344: filestore "download-only-20210816230902-111344": Docker machine "download-only-20210816230902-111344" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0816 23:09:39.071804    6756 driver.go:335] Setting default libvirt URI to qemu:///system
	W0816 23:09:39.072015    6756 start.go:659] api.Load failed for download-only-20210816230902-111344: filestore "download-only-20210816230902-111344": Docker machine "download-only-20210816230902-111344" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0816 23:09:40.704054    6756 docker.go:132] docker version: linux-20.10.2
	I0816 23:09:40.710927    6756 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0816 23:09:41.373841    6756 info.go:263] docker info: {ID:4XCY:3GZD:KK67:IPM7:RRQF:WWZF:OGQ6:X6HQ:572M:7N57:P63G:EAE5 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:45 SystemTime:2021-08-16 23:09:41.0787796 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0816 23:09:41.376883    6756 start.go:278] selected driver: docker
	I0816 23:09:41.377024    6756 start.go:751] validating driver "docker" against &{Name:download-only-20210816230902-111344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:15300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:download-only-20210816230902-111344 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 23:09:41.397809    6756 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0816 23:09:42.052442    6756 info.go:263] docker info: {ID:4XCY:3GZD:KK67:IPM7:RRQF:WWZF:OGQ6:X6HQ:572M:7N57:P63G:EAE5 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:45 SystemTime:2021-08-16 23:09:41.756229 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0816 23:09:42.101750    6756 cni.go:93] Creating CNI manager for ""
	I0816 23:09:42.101750    6756 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0816 23:09:42.101750    6756 start_flags.go:277] config:
	{Name:download-only-20210816230902-111344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:15300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:download-only-20210816230902-111344 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 23:09:42.104919    6756 cache.go:117] Beginning downloading kic base image for docker with docker
	I0816 23:09:42.106529    6756 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime docker
	I0816 23:09:42.106529    6756 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local docker daemon
	I0816 23:09:42.170374    6756 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.22.0-rc.0-docker-overlay2-amd64.tar.lz4
	I0816 23:09:42.171328    6756 cache.go:56] Caching tarball of preloaded images
	I0816 23:09:42.171871    6756 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime docker
	I0816 23:09:42.174522    6756 preload.go:237] getting checksum for preloaded-images-k8s-v11-v1.22.0-rc.0-docker-overlay2-amd64.tar.lz4 ...
	I0816 23:09:42.248118    6756 download.go:92] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.22.0-rc.0-docker-overlay2-amd64.tar.lz4?checksum=md5:24e0063355d7da59de0c5d619223de56 -> C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v11-v1.22.0-rc.0-docker-overlay2-amd64.tar.lz4
	I0816 23:09:42.520319    6756 cache.go:145] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 to local cache
	I0816 23:09:42.520682    6756 localpath.go:146] windows sanitize: C:\Users\jenkins\minikube-integration\.minikube\cache\kic\kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6.tar -> C:\Users\jenkins\minikube-integration\.minikube\cache\kic\kicbase-builds_v0.0.25-1628619379-12032@sha256_937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6.tar
	I0816 23:09:42.520917    6756 localpath.go:146] windows sanitize: C:\Users\jenkins\minikube-integration\.minikube\cache\kic\kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6.tar -> C:\Users\jenkins\minikube-integration\.minikube\cache\kic\kicbase-builds_v0.0.25-1628619379-12032@sha256_937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6.tar
	I0816 23:09:42.520917    6756 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local cache directory
	I0816 23:09:42.520917    6756 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 in local cache directory, skipping pull
	I0816 23:09:42.521262    6756 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 exists in cache, skipping pull
	I0816 23:09:42.521744    6756 cache.go:148] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 as a tarball
	I0816 23:09:50.685276    6756 preload.go:247] saving checksum for preloaded-images-k8s-v11-v1.22.0-rc.0-docker-overlay2-amd64.tar.lz4 ...
	I0816 23:09:50.686110    6756 preload.go:254] verifying checksum of C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v11-v1.22.0-rc.0-docker-overlay2-amd64.tar.lz4 ...
	I0816 23:09:52.619313    6756 cache.go:59] Finished verifying existence of preloaded tar for  v1.22.0-rc.0 on docker
	I0816 23:09:52.620298    6756 profile.go:148] Saving config to C:\Users\jenkins\minikube-integration\.minikube\profiles\download-only-20210816230902-111344\config.json ...
	I0816 23:09:52.622321    6756 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime docker
	I0816 23:09:52.623448    6756 download.go:92] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.22.0-rc.0/bin/windows/amd64/kubectl.exe?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.22.0-rc.0/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins\minikube-integration\.minikube\cache\windows\v1.22.0-rc.0/kubectl.exe
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210816230902-111344"

-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.22.0-rc.0/LogsDuration (0.43s)
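The non-zero `minikube logs` exit above is expected: a download-only profile never starts a control plane (hence the `The control plane node "" does not exist.` message), so the logs command has nothing to read and the test counts exit status 85 as a pass.

The download URLs in this log carry a `?checksum=` query, and preload.go hashes the fetched tarball against that digest before trusting it (the "getting checksum" / "saving checksum" / "verifying checksum" steps). A minimal Go sketch of that kind of post-download check; the file name and md5 digest are copied from the log, while the verifyMD5 helper is illustrative and not minikube's actual code:

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verifyMD5 hashes the file at path and compares the result to the
// expected hex digest, standing in for the checksum step logged above.
func verifyMD5(path, expected string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != expected {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, expected)
	}
	return nil
}

func main() {
	// Digest taken from the ?checksum=md5:... query string in the log.
	err := verifyMD5("preloaded-images-k8s-v11-v1.22.0-rc.0-docker-overlay2-amd64.tar.lz4",
		"24e0063355d7da59de0c5d619223de56")
	fmt.Println(err)
}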
TestDownloadOnly/DeleteAll (5.26s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:189: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (5.2606434s)
--- PASS: TestDownloadOnly/DeleteAll (5.26s)
TestDownloadOnly/DeleteAlwaysSucceeds (3.62s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:201: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-20210816230902-111344
aaa_download_only_test.go:201: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-20210816230902-111344: (3.6223066s)
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (3.62s)
TestDownloadOnlyKic (42.36s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:226: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p download-docker-20210816231008-111344 --force --alsologtostderr --driver=docker
aaa_download_only_test.go:226: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p download-docker-20210816231008-111344 --force --alsologtostderr --driver=docker: (35.5040183s)
helpers_test.go:176: Cleaning up "download-docker-20210816231008-111344" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-docker-20210816231008-111344
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-docker-20210816231008-111344: (4.2706171s)
--- PASS: TestDownloadOnlyKic (42.36s)
TestOffline (298.79s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-20210817001119-111344 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-20210817001119-111344 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker: (4m40.250175s)
helpers_test.go:176: Cleaning up "offline-docker-20210817001119-111344" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-20210817001119-111344
=== CONT  TestOffline
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-20210817001119-111344: (18.5380206s)
--- PASS: TestOffline (298.79s)
TestAddons/parallel/Ingress (88.75s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:158: (dbg) TestAddons/parallel/Ingress: waiting 12m0s for pods matching "app.kubernetes.io/name=ingress-nginx" in namespace "ingress-nginx" ...
helpers_test.go:343: "ingress-nginx-admission-create-qpljr" [d3ba2bc0-1f97-4585-b915-0a3bc1cb588f] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:158: (dbg) TestAddons/parallel/Ingress: app.kubernetes.io/name=ingress-nginx healthy within 27.6865ms
addons_test.go:165: (dbg) Run:  kubectl --context addons-20210816231050-111344 replace --force -f testdata\nginx-ingv1.yaml
=== CONT  TestAddons/parallel/Ingress
addons_test.go:165: (dbg) Done: kubectl --context addons-20210816231050-111344 replace --force -f testdata\nginx-ingv1.yaml: (1.4977148s)
addons_test.go:180: (dbg) Run:  kubectl --context addons-20210816231050-111344 replace --force -f testdata\nginx-pod-svc.yaml
=== CONT  TestAddons/parallel/Ingress
addons_test.go:185: (dbg) TestAddons/parallel/Ingress: waiting 4m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:343: "nginx" [157c6762-9e6f-439a-8539-eacc9f514d13] Pending
=== CONT  TestAddons/parallel/Ingress
helpers_test.go:343: "nginx" [157c6762-9e6f-439a-8539-eacc9f514d13] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
=== CONT  TestAddons/parallel/Ingress
helpers_test.go:343: "nginx" [157c6762-9e6f-439a-8539-eacc9f514d13] Running
=== CONT  TestAddons/parallel/Ingress
addons_test.go:185: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 39.0559232s
addons_test.go:204: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20210816231050-111344 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:204: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20210816231050-111344 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (4.1336916s)
addons_test.go:165: (dbg) Run:  kubectl --context addons-20210816231050-111344 replace --force -f testdata\nginx-ingv1.yaml
=== CONT  TestAddons/parallel/Ingress
addons_test.go:165: (dbg) Done: kubectl --context addons-20210816231050-111344 replace --force -f testdata\nginx-ingv1.yaml: (1.2694423s)
addons_test.go:242: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20210816231050-111344 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:242: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20210816231050-111344 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (3.912106s)
addons_test.go:265: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20210816231050-111344 addons disable ingress --alsologtostderr -v=1
=== CONT  TestAddons/parallel/Ingress
addons_test.go:265: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20210816231050-111344 addons disable ingress --alsologtostderr -v=1: (37.8242784s)
--- PASS: TestAddons/parallel/Ingress (88.75s)
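The curl step above is the actual ingress assertion: the test hits the controller from inside the node at 127.0.0.1 and selects the Ingress rule purely through the Host header. The same probe written in Go, for reproducing it by hand against a tunnel or port-forward; the address and host value come from the log, the rest is an illustrative sketch rather than the test's code:

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Request 127.0.0.1 but present the virtual host the Ingress rule matches;
	// setting req.Host overrides the Host header Go would otherwise send.
	req, err := http.NewRequest("GET", "http://127.0.0.1/", nil)
	if err != nil {
		panic(err)
	}
	req.Host = "nginx.example.com"
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, len(body), "bytes")
}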
TestAddons/parallel/MetricsServer (9.54s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:361: metrics-server stabilized in 55.0618ms
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:363: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
=== CONT  TestAddons/parallel/MetricsServer
helpers_test.go:343: "metrics-server-77c99ccb96-glflj" [a5ac0575-8d04-4cc3-83b9-cc0986a8d8e4] Running
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:363: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.0675912s
addons_test.go:369: (dbg) Run:  kubectl --context addons-20210816231050-111344 top pods -n kube-system
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:386: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20210816231050-111344 addons disable metrics-server --alsologtostderr -v=1
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:386: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20210816231050-111344 addons disable metrics-server --alsologtostderr -v=1: (4.0794328s)
--- PASS: TestAddons/parallel/MetricsServer (9.54s)
TestAddons/parallel/HelmTiller (80.61s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:410: tiller-deploy stabilized in 26.9146ms
addons_test.go:412: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:343: "tiller-deploy-768d69497-8dxp6" [0e80b784-328e-43b9-a861-307406fea486] Running
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:412: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.0846243s
addons_test.go:427: (dbg) Run:  kubectl --context addons-20210816231050-111344 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:427: (dbg) Done: kubectl --context addons-20210816231050-111344 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: (38.5558225s)
addons_test.go:432: kubectl --context addons-20210816231050-111344 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
Error attaching, falling back to logs: 
addons_test.go:427: (dbg) Run:  kubectl --context addons-20210816231050-111344 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:427: (dbg) Done: kubectl --context addons-20210816231050-111344 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: (24.8275714s)
addons_test.go:432: kubectl --context addons-20210816231050-111344 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
Error attaching, falling back to logs: 
addons_test.go:427: (dbg) Run:  kubectl --context addons-20210816231050-111344 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:427: (dbg) Done: kubectl --context addons-20210816231050-111344 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: (6.9808415s)
addons_test.go:444: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20210816231050-111344 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:444: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20210816231050-111344 addons disable helm-tiller --alsologtostderr -v=1: (3.9106197s)
--- PASS: TestAddons/parallel/HelmTiller (80.61s)
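The repeated "Unable to use a TTY" stderr is an artifact of running `kubectl run ... -it` under a test harness: stdin is not a terminal, so kubectl warns, fails to attach, and falls back to logs, and the test simply retries until stderr comes back clean. Requesting only stdin (`-i` without `-t`) sidesteps the warning. An illustrative Go wrapper for that variant, assuming kubectl is on PATH; the flags are copied from the test invocation (including `--serviceaccount`, which kubectl still accepted at this version):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same helm version check as the test, but with -i only: with no TTY
	// requested, kubectl has nothing to warn about.
	out, err := exec.Command("kubectl",
		"--context", "addons-20210816231050-111344",
		"run", "--rm", "helm-test", "--restart=Never",
		"--image=alpine/helm:2.16.3", "-i",
		"--namespace=kube-system", "--serviceaccount=tiller",
		"--", "version").CombinedOutput()
	fmt.Println(string(out), err)
}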
TestAddons/parallel/Olm (197.9s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:463: catalog-operator stabilized in 56.0719ms
=== CONT  TestAddons/parallel/Olm
addons_test.go:467: olm-operator stabilized in 65.6773ms
=== CONT  TestAddons/parallel/Olm
addons_test.go:471: packageserver stabilized in 82.2423ms
=== CONT  TestAddons/parallel/Olm
addons_test.go:473: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "app=catalog-operator" in namespace "olm" ...
=== CONT  TestAddons/parallel/Olm
helpers_test.go:343: "catalog-operator-75d496484d-tm95b" [3a96f010-d3ca-4939-9cc3-ff48787bd7fb] Running
=== CONT  TestAddons/parallel/Olm
addons_test.go:473: (dbg) TestAddons/parallel/Olm: app=catalog-operator healthy within 5.0528562s
=== CONT  TestAddons/parallel/Olm
addons_test.go:476: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "app=olm-operator" in namespace "olm" ...
=== CONT  TestAddons/parallel/Olm
helpers_test.go:343: "olm-operator-859c88c96-s6g8z" [06155659-3ca9-49df-a951-fa5ad1d0bf80] Running
=== CONT  TestAddons/parallel/Olm
addons_test.go:476: (dbg) TestAddons/parallel/Olm: app=olm-operator healthy within 5.1057919s
=== CONT  TestAddons/parallel/Olm
addons_test.go:479: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "app=packageserver" in namespace "olm" ...
helpers_test.go:343: "packageserver-6db76c9f-6lv6w" [e5a4e308-dbd4-49e5-8a53-f8c455e382e1] Running
helpers_test.go:343: "packageserver-6db76c9f-x47pk" [8274ee6c-807b-4cc8-964a-c562c9fb160b] Running
=== CONT  TestAddons/parallel/Olm
helpers_test.go:343: "packageserver-6db76c9f-6lv6w" [e5a4e308-dbd4-49e5-8a53-f8c455e382e1] Running
helpers_test.go:343: "packageserver-6db76c9f-x47pk" [8274ee6c-807b-4cc8-964a-c562c9fb160b] Running
=== CONT  TestAddons/parallel/Olm
helpers_test.go:343: "packageserver-6db76c9f-6lv6w" [e5a4e308-dbd4-49e5-8a53-f8c455e382e1] Running
helpers_test.go:343: "packageserver-6db76c9f-x47pk" [8274ee6c-807b-4cc8-964a-c562c9fb160b] Running
=== CONT  TestAddons/parallel/Olm
helpers_test.go:343: "packageserver-6db76c9f-6lv6w" [e5a4e308-dbd4-49e5-8a53-f8c455e382e1] Running
helpers_test.go:343: "packageserver-6db76c9f-x47pk" [8274ee6c-807b-4cc8-964a-c562c9fb160b] Running
=== CONT  TestAddons/parallel/Olm
helpers_test.go:343: "packageserver-6db76c9f-6lv6w" [e5a4e308-dbd4-49e5-8a53-f8c455e382e1] Running
helpers_test.go:343: "packageserver-6db76c9f-x47pk" [8274ee6c-807b-4cc8-964a-c562c9fb160b] Running
=== CONT  TestAddons/parallel/Olm
helpers_test.go:343: "packageserver-6db76c9f-6lv6w" [e5a4e308-dbd4-49e5-8a53-f8c455e382e1] Running
addons_test.go:479: (dbg) TestAddons/parallel/Olm: app=packageserver healthy within 5.0781182s
addons_test.go:482: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "olm.catalogSource=operatorhubio-catalog" in namespace "olm" ...
helpers_test.go:343: "operatorhubio-catalog-ql8cg" [0c657e30-09fa-41ed-a70f-37b3ada16071] Running
=== CONT  TestAddons/parallel/Olm
addons_test.go:482: (dbg) TestAddons/parallel/Olm: olm.catalogSource=operatorhubio-catalog healthy within 5.0730544s
addons_test.go:487: (dbg) Run:  kubectl --context addons-20210816231050-111344 create -f testdata\etcd.yaml
=== CONT  TestAddons/parallel/Olm
addons_test.go:487: (dbg) Done: kubectl --context addons-20210816231050-111344 create -f testdata\etcd.yaml: (1.2119734s)
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210816231050-111344 get csv -n my-etcd
addons_test.go:499: kubectl --context addons-20210816231050-111344 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.
=== CONT  TestAddons/parallel/Olm
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210816231050-111344 get csv -n my-etcd
addons_test.go:499: kubectl --context addons-20210816231050-111344 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.
=== CONT  TestAddons/parallel/Olm
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210816231050-111344 get csv -n my-etcd
addons_test.go:499: kubectl --context addons-20210816231050-111344 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.
=== CONT  TestAddons/parallel/Olm
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210816231050-111344 get csv -n my-etcd
=== CONT  TestAddons/parallel/Olm
addons_test.go:499: kubectl --context addons-20210816231050-111344 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.
=== CONT  TestAddons/parallel/Olm
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210816231050-111344 get csv -n my-etcd
addons_test.go:499: kubectl --context addons-20210816231050-111344 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.
=== CONT  TestAddons/parallel/Olm
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210816231050-111344 get csv -n my-etcd
addons_test.go:499: kubectl --context addons-20210816231050-111344 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.
=== CONT  TestAddons/parallel/Olm
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210816231050-111344 get csv -n my-etcd
addons_test.go:499: kubectl --context addons-20210816231050-111344 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.
=== CONT  TestAddons/parallel/Olm
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210816231050-111344 get csv -n my-etcd
=== CONT  TestAddons/parallel/Olm
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210816231050-111344 get csv -n my-etcd
--- PASS: TestAddons/parallel/Olm (197.90s)
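The run of `get csv -n my-etcd` commands above is a poll: OLM needs time to resolve the subscription created from testdata\etcd.yaml into a ClusterServiceVersion, so `No resources found` repeats until the CSV materializes. A standalone version of that retry loop; illustrative only, assumes kubectl on PATH, and the 6-minute budget mirrors the waits used elsewhere in this test:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		// CombinedOutput also captures the "No resources found" stderr line.
		out, _ := exec.Command("kubectl",
			"--context", "addons-20210816231050-111344",
			"get", "csv", "-n", "my-etcd").CombinedOutput()
		s := string(out)
		if strings.TrimSpace(s) != "" && !strings.Contains(s, "No resources found") {
			fmt.Println(s) // the CSV has shown up
			return
		}
		time.Sleep(10 * time.Second)
	}
	fmt.Println("timed out waiting for a CSV in my-etcd")
}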
TestAddons/parallel/CSI (136.49s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:526: csi-hostpath-driver pods stabilized in 54.3264ms
addons_test.go:529: (dbg) Run:  kubectl --context addons-20210816231050-111344 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:534: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20210816231050-111344 get pvc hpvc -o jsonpath={.status.phase} -n default
=== CONT  TestAddons/parallel/CSI
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20210816231050-111344 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:539: (dbg) Run:  kubectl --context addons-20210816231050-111344 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:544: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:343: "task-pv-pod" [5efb78ef-ef1c-47ac-b695-ab45029ee007] Pending
=== CONT  TestAddons/parallel/CSI
helpers_test.go:343: "task-pv-pod" [5efb78ef-ef1c-47ac-b695-ab45029ee007] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
=== CONT  TestAddons/parallel/CSI
helpers_test.go:343: "task-pv-pod" [5efb78ef-ef1c-47ac-b695-ab45029ee007] Running
=== CONT  TestAddons/parallel/CSI
addons_test.go:544: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 51.0847496s
addons_test.go:549: (dbg) Run:  kubectl --context addons-20210816231050-111344 create -f testdata\csi-hostpath-driver\snapshot.yaml
=== CONT  TestAddons/parallel/CSI
addons_test.go:554: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:418: (dbg) Run:  kubectl --context addons-20210816231050-111344 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:426: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:418: (dbg) Run:  kubectl --context addons-20210816231050-111344 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:559: (dbg) Run:  kubectl --context addons-20210816231050-111344 delete pod task-pv-pod
=== CONT  TestAddons/parallel/CSI
addons_test.go:559: (dbg) Done: kubectl --context addons-20210816231050-111344 delete pod task-pv-pod: (14.71194s)
addons_test.go:565: (dbg) Run:  kubectl --context addons-20210816231050-111344 delete pvc hpvc
addons_test.go:571: (dbg) Run:  kubectl --context addons-20210816231050-111344 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:576: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20210816231050-111344 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20210816231050-111344 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:581: (dbg) Run:  kubectl --context addons-20210816231050-111344 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:586: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:343: "task-pv-pod-restore" [1bc2e98b-3b28-4f3c-b2d8-45f40321b11b] Pending
helpers_test.go:343: "task-pv-pod-restore" [1bc2e98b-3b28-4f3c-b2d8-45f40321b11b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
=== CONT  TestAddons/parallel/CSI
helpers_test.go:343: "task-pv-pod-restore" [1bc2e98b-3b28-4f3c-b2d8-45f40321b11b] Running
addons_test.go:586: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 42.07822s
addons_test.go:591: (dbg) Run:  kubectl --context addons-20210816231050-111344 delete pod task-pv-pod-restore
addons_test.go:591: (dbg) Done: kubectl --context addons-20210816231050-111344 delete pod task-pv-pod-restore: (4.9702622s)
addons_test.go:595: (dbg) Run:  kubectl --context addons-20210816231050-111344 delete pvc hpvc-restore
addons_test.go:599: (dbg) Run:  kubectl --context addons-20210816231050-111344 delete volumesnapshot new-snapshot-demo
addons_test.go:603: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20210816231050-111344 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:603: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20210816231050-111344 addons disable csi-hostpath-driver --alsologtostderr -v=1: (10.7623097s)
addons_test.go:607: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20210816231050-111344 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:607: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20210816231050-111344 addons disable volumesnapshots --alsologtostderr -v=1: (4.0684654s)
--- PASS: TestAddons/parallel/CSI (136.49s)
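The helpers_test.go:393 lines above poll a JSONPath projection of the claim's status until the phase reports Bound. The same wait extracted into a standalone Go helper; waitPVCBound is an illustrative name rather than one from the suite, and kubectl is assumed on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitPVCBound polls `kubectl get pvc -o jsonpath={.status.phase}` until
// the claim reports Bound, mirroring the helper the CSI test relies on.
func waitPVCBound(ctx, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", ctx,
			"get", "pvc", name, "-o", "jsonpath={.status.phase}",
			"-n", "default").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %s not Bound within %v", name, timeout)
}

func main() {
	fmt.Println(waitPVCBound("addons-20210816231050-111344", "hpvc", 6*time.Minute))
}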
TestDockerFlags (210.21s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-20210817001618-111344 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-20210817001618-111344 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker: (3m4.0894414s)
docker_test.go:50: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-20210817001618-111344 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:50: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-20210817001618-111344 ssh "sudo systemctl show docker --property=Environment --no-pager": (4.3275946s)
docker_test.go:61: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-20210817001618-111344 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:61: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-20210817001618-111344 ssh "sudo systemctl show docker --property=ExecStart --no-pager": (4.322175s)
helpers_test.go:176: Cleaning up "docker-flags-20210817001618-111344" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-20210817001618-111344
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-20210817001618-111344: (17.4673654s)
--- PASS: TestDockerFlags (210.21s)
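The two `systemctl show docker` probes are the substance of this test: every `--docker-env` pair must surface in the unit's Environment= line and every `--docker-opt` in ExecStart=. A sketch of the first assertion in Go; illustrative, assumes minikube on PATH, and reuses the profile name and env pairs from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Ask systemd inside the node for dockerd's environment block,
	// e.g. "Environment=FOO=BAR BAZ=BAT".
	out, err := exec.Command("minikube", "-p", "docker-flags-20210817001618-111344",
		"ssh", "sudo systemctl show docker --property=Environment --no-pager").Output()
	if err != nil {
		panic(err)
	}
	env := string(out)
	for _, want := range []string{"FOO=BAR", "BAZ=BAT"} {
		fmt.Printf("%s present: %v\n", want, strings.Contains(env, want))
	}
}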
TestForceSystemdFlag (205.62s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-20210817001912-111344 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-20210817001912-111344 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker: (2m57.1382842s)
docker_test.go:102: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-20210817001912-111344 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:102: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-20210817001912-111344 ssh "docker info --format {{.CgroupDriver}}": (7.8731548s)
helpers_test.go:176: Cleaning up "force-systemd-flag-20210817001912-111344" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-20210817001912-111344
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-20210817001912-111344: (20.6034051s)
--- PASS: TestForceSystemdFlag (205.62s)
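With `--force-systemd`, the pass condition at docker_test.go:102 reduces to a single formatted `docker info` call inside the node returning "systemd" rather than the default "cgroupfs". The same check in Go; illustrative, assumes minikube on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("minikube", "-p", "force-systemd-flag-20210817001912-111344",
		"ssh", "docker info --format {{.CgroupDriver}}").Output()
	if err != nil {
		panic(err)
	}
	driver := strings.TrimSpace(string(out))
	fmt.Println("cgroup driver:", driver, "- want systemd:", driver == "systemd")
}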
TestForceSystemdEnv (236.3s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:136: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-20210817001119-111344 --memory=2048 --alsologtostderr -v=5 --driver=docker
=== CONT  TestForceSystemdEnv
docker_test.go:136: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-20210817001119-111344 --memory=2048 --alsologtostderr -v=5 --driver=docker: (3m29.0864154s)
docker_test.go:102: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-20210817001119-111344 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:102: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-20210817001119-111344 ssh "docker info --format {{.CgroupDriver}}": (7.3591741s)
helpers_test.go:176: Cleaning up "force-systemd-env-20210817001119-111344" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-20210817001119-111344
=== CONT  TestForceSystemdEnv
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-20210817001119-111344: (19.8490215s)
--- PASS: TestForceSystemdEnv (236.30s)
TestErrorSpam/setup (98.81s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-20210816232053-111344 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins\AppData\Local\Temp\nospam-20210816232053-111344 --driver=docker
E0816 23:22:00.930328  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344\client.crt: The system cannot find the path specified.
E0816 23:22:00.939113  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344\client.crt: The system cannot find the path specified.
E0816 23:22:00.949833  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344\client.crt: The system cannot find the path specified.
E0816 23:22:00.971015  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344\client.crt: The system cannot find the path specified.
E0816 23:22:01.012950  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344\client.crt: The system cannot find the path specified.
E0816 23:22:01.093244  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344\client.crt: The system cannot find the path specified.
E0816 23:22:01.253645  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344\client.crt: The system cannot find the path specified.
E0816 23:22:01.574213  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344\client.crt: The system cannot find the path specified.
E0816 23:22:02.216693  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344\client.crt: The system cannot find the path specified.
E0816 23:22:03.497639  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344\client.crt: The system cannot find the path specified.
E0816 23:22:06.059157  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344\client.crt: The system cannot find the path specified.
E0816 23:22:11.181064  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344\client.crt: The system cannot find the path specified.
E0816 23:22:21.423085  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344\client.crt: The system cannot find the path specified.
error_spam_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-20210816232053-111344 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins\AppData\Local\Temp\nospam-20210816232053-111344 --driver=docker: (1m38.8079913s)
--- PASS: TestErrorSpam/setup (98.81s)
TestErrorSpam/start (10.84s)

=== RUN   TestErrorSpam/start
error_spam_test.go:213: Cleaning up 1 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20210816232053-111344 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210816232053-111344 start --dry-run
error_spam_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20210816232053-111344 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210816232053-111344 start --dry-run: (3.7436054s)
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20210816232053-111344 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210816232053-111344 start --dry-run
error_spam_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20210816232053-111344 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210816232053-111344 start --dry-run: (3.5147906s)
error_spam_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20210816232053-111344 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210816232053-111344 start --dry-run
E0816 23:22:41.904771  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344\client.crt: The system cannot find the path specified.
error_spam_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20210816232053-111344 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210816232053-111344 start --dry-run: (3.5813094s)
--- PASS: TestErrorSpam/start (10.84s)
TestErrorSpam/status (11.21s)

=== RUN   TestErrorSpam/status
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20210816232053-111344 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210816232053-111344 status
error_spam_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20210816232053-111344 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210816232053-111344 status: (3.7702236s)
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20210816232053-111344 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210816232053-111344 status
error_spam_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20210816232053-111344 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210816232053-111344 status: (3.7176321s)
error_spam_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20210816232053-111344 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210816232053-111344 status
error_spam_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20210816232053-111344 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210816232053-111344 status: (3.7209654s)
--- PASS: TestErrorSpam/status (11.21s)
TestErrorSpam/pause (10.67s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20210816232053-111344 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210816232053-111344 pause
error_spam_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20210816232053-111344 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210816232053-111344 pause: (3.9501614s)
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20210816232053-111344 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210816232053-111344 pause
error_spam_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20210816232053-111344 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210816232053-111344 pause: (3.3561837s)
error_spam_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20210816232053-111344 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210816232053-111344 pause
error_spam_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20210816232053-111344 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210816232053-111344 pause: (3.3607063s)
--- PASS: TestErrorSpam/pause (10.67s)
TestErrorSpam/unpause (10.81s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20210816232053-111344 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210816232053-111344 unpause
error_spam_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20210816232053-111344 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210816232053-111344 unpause: (3.9407779s)
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20210816232053-111344 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210816232053-111344 unpause
error_spam_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20210816232053-111344 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210816232053-111344 unpause: (3.5000948s)
error_spam_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20210816232053-111344 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210816232053-111344 unpause
error_spam_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20210816232053-111344 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210816232053-111344 unpause: (3.3625275s)
--- PASS: TestErrorSpam/unpause (10.81s)
TestErrorSpam/stop (21.37s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20210816232053-111344 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210816232053-111344 stop
E0816 23:23:22.867620  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344\client.crt: The system cannot find the path specified.
error_spam_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20210816232053-111344 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210816232053-111344 stop: (10.5376406s)
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20210816232053-111344 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210816232053-111344 stop
error_spam_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20210816232053-111344 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210816232053-111344 stop: (5.4119272s)
error_spam_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20210816232053-111344 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210816232053-111344 stop
error_spam_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20210816232053-111344 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210816232053-111344 stop: (5.413909s)
--- PASS: TestErrorSpam/stop (21.37s)
TestFunctional/serial/CopySyncFile (0.04s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1606: local sync path: C:\Users\jenkins\minikube-integration\.minikube\files\etc\test\nested\copy\111344\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.04s)
TestFunctional/serial/StartWithProxy (140.13s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:1982: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20210816232348-111344 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker
E0816 23:24:44.799867  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344\client.crt: The system cannot find the path specified.
functional_test.go:1982: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-20210816232348-111344 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker: (2m20.1258947s)
--- PASS: TestFunctional/serial/StartWithProxy (140.13s)
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)
TestFunctional/serial/SoftStart (19.53s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:627: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20210816232348-111344 --alsologtostderr -v=8
functional_test.go:627: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-20210816232348-111344 --alsologtostderr -v=8: (19.5213665s)
functional_test.go:631: soft start took 19.5252257s for "functional-20210816232348-111344" cluster.
--- PASS: TestFunctional/serial/SoftStart (19.53s)
TestFunctional/serial/KubeContext (0.15s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:647: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.15s)
TestFunctional/serial/KubectlGetPods (0.42s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:660: (dbg) Run:  kubectl --context functional-20210816232348-111344 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.42s)
TestFunctional/serial/CacheCmd/cache/add_remote (15.94s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:982: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210816232348-111344 cache add k8s.gcr.io/pause:3.1
functional_test.go:982: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210816232348-111344 cache add k8s.gcr.io/pause:3.1: (5.3647627s)
functional_test.go:982: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210816232348-111344 cache add k8s.gcr.io/pause:3.3
functional_test.go:982: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210816232348-111344 cache add k8s.gcr.io/pause:3.3: (5.269839s)
functional_test.go:982: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210816232348-111344 cache add k8s.gcr.io/pause:latest
functional_test.go:982: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210816232348-111344 cache add k8s.gcr.io/pause:latest: (5.30077s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (15.94s)
TestFunctional/serial/CacheCmd/cache/add_local (6.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1012: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20210816232348-111344 C:\Users\jenkins\AppData\Local\Temp\functional-20210816232348-111344869103595
functional_test.go:1012: (dbg) Done: docker build -t minikube-local-cache-test:functional-20210816232348-111344 C:\Users\jenkins\AppData\Local\Temp\functional-20210816232348-111344869103595: (1.1434508s)
functional_test.go:1024: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210816232348-111344 cache add minikube-local-cache-test:functional-20210816232348-111344
functional_test.go:1024: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210816232348-111344 cache add minikube-local-cache-test:functional-20210816232348-111344: (4.2329977s)
functional_test.go:1029: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210816232348-111344 cache delete minikube-local-cache-test:functional-20210816232348-111344
functional_test.go:1018: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20210816232348-111344
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (6.17s)
TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.32s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1036: (dbg) Run:  out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.32s)
TestFunctional/serial/CacheCmd/cache/list (0.33s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1043: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.33s)
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (3.56s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1056: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210816232348-111344 ssh sudo crictl images
functional_test.go:1056: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210816232348-111344 ssh sudo crictl images: (3.5606396s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (3.56s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (15.19s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1078: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210816232348-111344 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1078: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210816232348-111344 ssh sudo docker rmi k8s.gcr.io/pause:latest: (3.5606759s)
functional_test.go:1084: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210816232348-111344 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
E0816 23:27:00.942314  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344\client.crt: The system cannot find the path specified.
functional_test.go:1084: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20210816232348-111344 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (3.5718363s)
-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1089: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210816232348-111344 cache reload
functional_test.go:1089: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210816232348-111344 cache reload: (4.5311383s)
functional_test.go:1094: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210816232348-111344 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1094: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210816232348-111344 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: (3.5204549s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (15.19s)
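
Note: the reload sequence above is worth spelling out: the image is deleted inside the node, crictl inspecti is expected to fail (the FATA "no such image" output), cache reload restores everything on the cache list, and the same inspecti then succeeds. A minimal sketch of that assertion pattern, assuming a hypothetical profile "demo":

	package main

	import (
		"log"
		"os/exec"
	)

	// inspecti exits non-zero when the image is absent from the node,
	// as seen in the FATA output above.
	func inspecti(profile, image string) error {
		return exec.Command("minikube", "-p", profile, "ssh",
			"sudo crictl inspecti "+image).Run()
	}

	func main() {
		profile, image := "demo", "k8s.gcr.io/pause:latest"
		// Remove the image inside the node.
		if err := exec.Command("minikube", "-p", profile, "ssh",
			"sudo docker rmi "+image).Run(); err != nil {
			log.Fatal(err)
		}
		if inspecti(profile, image) == nil {
			log.Fatal("image should be gone before reload")
		}
		// cache reload re-pushes every image on the local cache list.
		if err := exec.Command("minikube", "-p", profile, "cache", "reload").Run(); err != nil {
			log.Fatal(err)
		}
		if err := inspecti(profile, image); err != nil {
			log.Fatal("image still missing after reload: ", err)
		}
	}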

TestFunctional/serial/CacheCmd/cache/delete (0.62s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1103: (dbg) Run:  out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:3.1
functional_test.go:1103: (dbg) Run:  out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.62s)

TestFunctional/serial/MinikubeKubectlCmd (2.25s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:678: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210816232348-111344 kubectl -- --context functional-20210816232348-111344 get pods
functional_test.go:678: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210816232348-111344 kubectl -- --context functional-20210816232348-111344 get pods: (2.2522007s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (2.25s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.84s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:701: (dbg) Run:  out\kubectl.exe --context functional-20210816232348-111344 get pods
functional_test.go:701: (dbg) Done: out\kubectl.exe --context functional-20210816232348-111344 get pods: (1.8242494s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.84s)

TestFunctional/serial/ExtraConfig (102.17s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:715: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20210816232348-111344 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0816 23:27:28.649140  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344\client.crt: The system cannot find the path specified.
functional_test.go:715: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-20210816232348-111344 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m42.1655549s)
functional_test.go:719: restart took 1m42.1664066s for "functional-20210816232348-111344" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (102.17s)

TestFunctional/serial/ComponentHealth (0.22s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:766: (dbg) Run:  kubectl --context functional-20210816232348-111344 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:780: etcd phase: Running
functional_test.go:790: etcd status: Ready
functional_test.go:780: kube-apiserver phase: Running
functional_test.go:790: kube-apiserver status: Ready
functional_test.go:780: kube-controller-manager phase: Running
functional_test.go:790: kube-controller-manager status: Ready
functional_test.go:780: kube-scheduler phase: Running
functional_test.go:790: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.22s)

TestFunctional/serial/LogsCmd (5.88s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1165: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210816232348-111344 logs
functional_test.go:1165: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210816232348-111344 logs: (5.8843478s)
--- PASS: TestFunctional/serial/LogsCmd (5.88s)

TestFunctional/serial/LogsFileCmd (5.6s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1181: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210816232348-111344 logs --file C:\Users\jenkins\AppData\Local\Temp\functional-20210816232348-111344114365518\logs.txt
functional_test.go:1181: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210816232348-111344 logs --file C:\Users\jenkins\AppData\Local\Temp\functional-20210816232348-111344114365518\logs.txt: (5.601466s)
--- PASS: TestFunctional/serial/LogsFileCmd (5.60s)

TestFunctional/parallel/ConfigCmd (2.09s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1129: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210816232348-111344 config unset cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1129: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210816232348-111344 config get cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1129: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20210816232348-111344 config get cpus: exit status 14 (359.9483ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1129: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210816232348-111344 config set cpus 2
functional_test.go:1129: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210816232348-111344 config get cpus
functional_test.go:1129: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210816232348-111344 config unset cpus
functional_test.go:1129: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210816232348-111344 config get cpus
functional_test.go:1129: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20210816232348-111344 config get cpus: exit status 14 (322.2165ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (2.09s)
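
Note: config get exits with status 14 when the key is unset, which is exactly what the two Non-zero exit entries above assert after each unset. A minimal sketch of distinguishing that case from other failures, assuming a hypothetical profile "demo":

	package main

	import (
		"errors"
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	// getConfig returns (value, found), treating exit status 14 as
	// "key not set", matching the behaviour shown in the log above.
	func getConfig(profile, key string) (string, bool) {
		out, err := exec.Command("minikube", "-p", profile, "config", "get", key).Output()
		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 14 {
			return "", false
		}
		if err != nil {
			log.Fatal(err)
		}
		return strings.TrimSpace(string(out)), true
	}

	func main() {
		if v, ok := getConfig("demo", "cpus"); ok {
			fmt.Println("cpus =", v)
		} else {
			fmt.Println("cpus is not set")
		}
	}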

TestFunctional/parallel/DryRun (6.87s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:919: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20210816232348-111344 --dry-run --memory 250MB --alsologtostderr --driver=docker
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:919: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-20210816232348-111344 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (3.2216919s)
-- stdout --
	* [functional-20210816232348-111344] minikube v1.22.0 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	  - KUBECONFIG=C:\Users\jenkins\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12230
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0816 23:29:58.808959   74240 out.go:298] Setting OutFile to fd 768 ...
	I0816 23:29:58.809963   74240 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 23:29:58.809963   74240 out.go:311] Setting ErrFile to fd 1128...
	I0816 23:29:58.809963   74240 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 23:29:58.829151   74240 out.go:305] Setting JSON to false
	I0816 23:29:58.844060   74240 start.go:111] hostinfo: {"hostname":"windows-server-2","uptime":8364646,"bootTime":1620791952,"procs":144,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2f8328f4-5428-47c7-ab5a-b32e2504bd6f"}
	W0816 23:29:58.845079   74240 start.go:119] gopshost.Virtualization returned error: not implemented yet
	I0816 23:29:58.851086   74240 out.go:177] * [functional-20210816232348-111344] minikube v1.22.0 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	I0816 23:29:58.851086   74240 out.go:177]   - KUBECONFIG=C:\Users\jenkins\minikube-integration\kubeconfig
	I0816 23:29:58.855562   74240 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins\minikube-integration\.minikube
	I0816 23:29:58.857384   74240 out.go:177]   - MINIKUBE_LOCATION=12230
	I0816 23:29:58.858918   74240 config.go:177] Loaded profile config "functional-20210816232348-111344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.21.3
	I0816 23:29:58.861005   74240 driver.go:335] Setting default libvirt URI to qemu:///system
	I0816 23:30:00.744828   74240 docker.go:132] docker version: linux-20.10.2
	I0816 23:30:00.751880   74240 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0816 23:30:01.616380   74240 info.go:263] docker info: {ID:4XCY:3GZD:KK67:IPM7:RRQF:WWZF:OGQ6:X6HQ:572M:7N57:P63G:EAE5 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:51 SystemTime:2021-08-16 23:30:01.2016922 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0816 23:30:01.619901   74240 out.go:177] * Using the docker driver based on existing profile
	I0816 23:30:01.620157   74240 start.go:278] selected driver: docker
	I0816 23:30:01.620157   74240 start.go:751] validating driver "docker" against &{Name:functional-20210816232348-111344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:functional-20210816232348-111344 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 23:30:01.620331   74240 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0816 23:30:01.708506   74240 out.go:177] 
	W0816 23:30:01.708833   74240 out.go:242] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0816 23:30:01.710387   74240 out.go:177] 
** /stderr **
functional_test.go:934: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20210816232348-111344 --dry-run --alsologtostderr -v=1 --driver=docker
functional_test.go:934: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-20210816232348-111344 --dry-run --alsologtostderr -v=1 --driver=docker: (3.6427572s)
--- PASS: TestFunctional/parallel/DryRun (6.87s)
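
Note: --dry-run runs the full validation path without touching the existing cluster, so the undersized --memory request above fails fast with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY: 250MiB is below the usable minimum of 1800MB) while the second, unconstrained invocation passes. A minimal sketch of asserting that, assuming a hypothetical profile "demo":

	package main

	import (
		"errors"
		"log"
		"os/exec"
	)

	func main() {
		// 250MB is below minikube's usable minimum of 1800MB, so this
		// should fail validation without modifying the profile.
		err := exec.Command("minikube", "start", "-p", "demo",
			"--dry-run", "--memory", "250MB", "--driver=docker").Run()
		var ee *exec.ExitError
		if !errors.As(err, &ee) || ee.ExitCode() != 23 {
			log.Fatalf("expected exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY), got %v", err)
		}
		log.Println("dry-run rejected the 250MB request as expected")
	}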

TestFunctional/parallel/InternationalLanguage (2.91s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:956: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20210816232348-111344 --dry-run --memory 250MB --alsologtostderr --driver=docker
functional_test.go:956: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-20210816232348-111344 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (2.9048179s)
-- stdout --
	* [functional-20210816232348-111344] minikube v1.22.0 sur Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	  - KUBECONFIG=C:\Users\jenkins\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12230
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0816 23:30:05.630637    4560 out.go:298] Setting OutFile to fd 1292 ...
	I0816 23:30:05.631838    4560 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 23:30:05.631838    4560 out.go:311] Setting ErrFile to fd 1268...
	I0816 23:30:05.631838    4560 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 23:30:05.657253    4560 out.go:305] Setting JSON to false
	I0816 23:30:05.664913    4560 start.go:111] hostinfo: {"hostname":"windows-server-2","uptime":8364652,"bootTime":1620791953,"procs":143,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2f8328f4-5428-47c7-ab5a-b32e2504bd6f"}
	W0816 23:30:05.665121    4560 start.go:119] gopshost.Virtualization returned error: not implemented yet
	I0816 23:30:05.672931    4560 out.go:177] * [functional-20210816232348-111344] minikube v1.22.0 sur Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	I0816 23:30:05.674627    4560 out.go:177]   - KUBECONFIG=C:\Users\jenkins\minikube-integration\kubeconfig
	I0816 23:30:05.676558    4560 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins\minikube-integration\.minikube
	I0816 23:30:05.677137    4560 out.go:177]   - MINIKUBE_LOCATION=12230
	I0816 23:30:05.678446    4560 config.go:177] Loaded profile config "functional-20210816232348-111344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.21.3
	I0816 23:30:05.679458    4560 driver.go:335] Setting default libvirt URI to qemu:///system
	I0816 23:30:07.444053    4560 docker.go:132] docker version: linux-20.10.2
	I0816 23:30:07.452137    4560 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0816 23:30:08.197437    4560 info.go:263] docker info: {ID:4XCY:3GZD:KK67:IPM7:RRQF:WWZF:OGQ6:X6HQ:572M:7N57:P63G:EAE5 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:51 SystemTime:2021-08-16 23:30:07.8687777 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0816 23:30:08.200236    4560 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0816 23:30:08.200502    4560 start.go:278] selected driver: docker
	I0816 23:30:08.200502    4560 start.go:751] validating driver "docker" against &{Name:functional-20210816232348-111344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:functional-20210816232348-111344 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 23:30:08.200680    4560 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0816 23:30:08.260583    4560 out.go:177] 
	W0816 23:30:08.260957    4560 out.go:242] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0816 23:30:08.262796    4560 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (2.91s)

TestFunctional/parallel/StatusCmd (12.6s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:809: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210816232348-111344 status
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:809: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210816232348-111344 status: (4.202116s)
functional_test.go:815: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210816232348-111344 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:815: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210816232348-111344 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: (4.1954075s)
functional_test.go:826: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210816232348-111344 status -o json
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:826: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210816232348-111344 status -o json: (4.2036534s)
--- PASS: TestFunctional/parallel/StatusCmd (12.60s)
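
Note: status supports both a Go template (-f, with the keys .Host, .Kubelet, .APIServer, .Kubeconfig used in the command above) and -o json. A minimal sketch of consuming the JSON form; the struct fields below are inferred from those same template keys, and the profile name "demo" is hypothetical:

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// Fields mirror the template keys exercised above; other fields omitted.
	type status struct {
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
	}

	func main() {
		// Note: status exits non-zero when a component is down; this
		// sketch assumes a running cluster.
		out, err := exec.Command("minikube", "-p", "demo", "status", "-o", "json").Output()
		if err != nil {
			log.Fatal(err)
		}
		var st status
		if err := json.Unmarshal(out, &st); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("host=%s kubelet=%s apiserver=%s kubeconfig=%s\n",
			st.Host, st.Kubelet, st.APIServer, st.Kubeconfig)
	}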

TestFunctional/parallel/AddonsCmd (2.5s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1465: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210816232348-111344 addons list
functional_test.go:1465: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210816232348-111344 addons list: (2.1665235s)
functional_test.go:1476: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210816232348-111344 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (2.50s)

TestFunctional/parallel/PersistentVolumeClaim (69.22s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:343: "storage-provisioner" [83eb40ac-73e3-4bef-bdf3-dc127335d2ef] Running
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.0352648s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-20210816232348-111344 get storageclass -o=json
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-20210816232348-111344 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20210816232348-111344 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20210816232348-111344 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:343: "sp-pod" [3f2340b9-cc14-4aba-899d-2e675fb968d0] Pending
helpers_test.go:343: "sp-pod" [3f2340b9-cc14-4aba-899d-2e675fb968d0] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:343: "sp-pod" [3f2340b9-cc14-4aba-899d-2e675fb968d0] Running
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 44.0367262s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-20210816232348-111344 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-20210816232348-111344 delete -f testdata/storage-provisioner/pod.yaml
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-20210816232348-111344 delete -f testdata/storage-provisioner/pod.yaml: (9.0357035s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20210816232348-111344 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:343: "sp-pod" [b404ad4f-aa36-4461-b6f6-797b160b4203] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:343: "sp-pod" [b404ad4f-aa36-4461-b6f6-797b160b4203] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.0200073s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-20210816232348-111344 exec sp-pod -- ls /tmp/mount
E0816 23:32:00.953092  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344\client.crt: The system cannot find the path specified.
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (69.22s)
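
Note: the pass criterion above is persistence, not just scheduling: a file touched in the first sp-pod survives the pod's deletion and shows up in a replacement pod mounting the same claim. A minimal sketch of that round trip with kubectl, using the pod name and manifest paths from the log; the readiness wait between apply and exec is elided:

	package main

	import (
		"log"
		"os/exec"
		"strings"
	)

	func kubectl(args ...string) string {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err != nil {
			log.Fatalf("kubectl %v: %v\n%s", args, err, out)
		}
		return string(out)
	}

	func main() {
		// Write a marker file through the first pod's mounted claim.
		kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
		// Recreate the pod; the PVC (and its data) must outlive it.
		kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
		kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
		// ... wait for the new pod to become Ready, then:
		if !strings.Contains(kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount"), "foo") {
			log.Fatal("marker file did not survive pod recreation")
		}
		log.Println("PVC data persisted across pod recreation")
	}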

TestFunctional/parallel/SSHCmd (7.9s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1498: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210816232348-111344 ssh "echo hello"
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1498: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210816232348-111344 ssh "echo hello": (3.9343588s)
functional_test.go:1515: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210816232348-111344 ssh "cat /etc/hostname"
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1515: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210816232348-111344 ssh "cat /etc/hostname": (3.9676424s)
--- PASS: TestFunctional/parallel/SSHCmd (7.90s)

TestFunctional/parallel/CpCmd (7.89s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:535: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210816232348-111344 cp testdata\cp-test.txt /home/docker/cp-test.txt
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:535: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210816232348-111344 cp testdata\cp-test.txt /home/docker/cp-test.txt: (3.7945404s)
helpers_test.go:549: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210816232348-111344 ssh "sudo cat /home/docker/cp-test.txt"
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:549: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210816232348-111344 ssh "sudo cat /home/docker/cp-test.txt": (4.0977623s)
--- PASS: TestFunctional/parallel/CpCmd (7.89s)

TestFunctional/parallel/MySQL (60.26s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1546: (dbg) Run:  kubectl --context functional-20210816232348-111344 replace --force -f testdata\mysql.yaml
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1551: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:343: "mysql-9bbbc5bbb-m24cz" [f5fb024e-40d7-4d41-828e-a8348f246663] Pending
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:343: "mysql-9bbbc5bbb-m24cz" [f5fb024e-40d7-4d41-828e-a8348f246663] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:343: "mysql-9bbbc5bbb-m24cz" [f5fb024e-40d7-4d41-828e-a8348f246663] Running
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1551: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 41.1104319s
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20210816232348-111344 exec mysql-9bbbc5bbb-m24cz -- mysql -ppassword -e "show databases;"
functional_test.go:1558: (dbg) Non-zero exit: kubectl --context functional-20210816232348-111344 exec mysql-9bbbc5bbb-m24cz -- mysql -ppassword -e "show databases;": exit status 1 (844.7087ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20210816232348-111344 exec mysql-9bbbc5bbb-m24cz -- mysql -ppassword -e "show databases;"
functional_test.go:1558: (dbg) Non-zero exit: kubectl --context functional-20210816232348-111344 exec mysql-9bbbc5bbb-m24cz -- mysql -ppassword -e "show databases;": exit status 1 (731.284ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20210816232348-111344 exec mysql-9bbbc5bbb-m24cz -- mysql -ppassword -e "show databases;"
functional_test.go:1558: (dbg) Non-zero exit: kubectl --context functional-20210816232348-111344 exec mysql-9bbbc5bbb-m24cz -- mysql -ppassword -e "show databases;": exit status 1 (841.6591ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20210816232348-111344 exec mysql-9bbbc5bbb-m24cz -- mysql -ppassword -e "show databases;"
functional_test.go:1558: (dbg) Non-zero exit: kubectl --context functional-20210816232348-111344 exec mysql-9bbbc5bbb-m24cz -- mysql -ppassword -e "show databases;": exit status 1 (504.7254ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20210816232348-111344 exec mysql-9bbbc5bbb-m24cz -- mysql -ppassword -e "show databases;"
functional_test.go:1558: (dbg) Non-zero exit: kubectl --context functional-20210816232348-111344 exec mysql-9bbbc5bbb-m24cz -- mysql -ppassword -e "show databases;": exit status 1 (417.5536ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20210816232348-111344 exec mysql-9bbbc5bbb-m24cz -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (60.26s)
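
Note: the repeated Non-zero exits above are expected, not flakiness: mysqld only accepts connections (ERROR 2002, socket not up) and finishes applying root's password (ERROR 1045) some seconds after the pod reports Running, so the harness polls until the query succeeds. A minimal sketch of the same poll-until-ready loop; the pod name is the one from this run's log and would normally be discovered via kubectl get pods -l app=mysql:

	package main

	import (
		"log"
		"os/exec"
		"time"
	)

	func main() {
		pod := "mysql-9bbbc5bbb-m24cz" // pod name from the log above
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			// ERROR 2002 (socket not up) and ERROR 1045 (root password
			// not yet applied) both surface as a non-zero exit; retry.
			err := exec.Command("kubectl", "exec", pod, "--",
				"mysql", "-ppassword", "-e", "show databases;").Run()
			if err == nil {
				log.Println("mysql is ready")
				return
			}
			time.Sleep(5 * time.Second)
		}
		log.Fatal("mysql never became ready")
	}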

TestFunctional/parallel/FileSync (4.08s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1678: Checking for existence of /etc/test/nested/copy/111344/hosts within VM
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1679: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210816232348-111344 ssh "sudo cat /etc/test/nested/copy/111344/hosts"
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1679: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210816232348-111344 ssh "sudo cat /etc/test/nested/copy/111344/hosts": (4.0812701s)
functional_test.go:1684: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (4.08s)

TestFunctional/parallel/CertSync (23.48s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1719: Checking for existence of /etc/ssl/certs/111344.pem within VM
functional_test.go:1720: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210816232348-111344 ssh "sudo cat /etc/ssl/certs/111344.pem"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1720: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210816232348-111344 ssh "sudo cat /etc/ssl/certs/111344.pem": (3.8538833s)
functional_test.go:1719: Checking for existence of /usr/share/ca-certificates/111344.pem within VM
functional_test.go:1720: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210816232348-111344 ssh "sudo cat /usr/share/ca-certificates/111344.pem"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1720: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210816232348-111344 ssh "sudo cat /usr/share/ca-certificates/111344.pem": (3.988427s)
functional_test.go:1719: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1720: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210816232348-111344 ssh "sudo cat /etc/ssl/certs/51391683.0"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1720: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210816232348-111344 ssh "sudo cat /etc/ssl/certs/51391683.0": (3.8467735s)
functional_test.go:1746: Checking for existence of /etc/ssl/certs/1113442.pem within VM
functional_test.go:1747: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210816232348-111344 ssh "sudo cat /etc/ssl/certs/1113442.pem"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1747: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210816232348-111344 ssh "sudo cat /etc/ssl/certs/1113442.pem": (3.9946811s)
functional_test.go:1746: Checking for existence of /usr/share/ca-certificates/1113442.pem within VM
functional_test.go:1747: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210816232348-111344 ssh "sudo cat /usr/share/ca-certificates/1113442.pem"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1747: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210816232348-111344 ssh "sudo cat /usr/share/ca-certificates/1113442.pem": (3.9219336s)
functional_test.go:1746: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1747: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210816232348-111344 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1747: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210816232348-111344 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (3.871534s)
--- PASS: TestFunctional/parallel/CertSync (23.48s)

TestFunctional/parallel/DockerEnv (15.29s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:476: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-20210816232348-111344 docker-env | Invoke-Expression ;out/minikube-windows-amd64.exe status -p functional-20210816232348-111344"
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:476: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-20210816232348-111344 docker-env | Invoke-Expression ;out/minikube-windows-amd64.exe status -p functional-20210816232348-111344": (9.5194325s)
functional_test.go:500: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-20210816232348-111344 docker-env | Invoke-Expression ; docker images"
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:500: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-20210816232348-111344 docker-env | Invoke-Expression ; docker images": (5.7600902s)
--- PASS: TestFunctional/parallel/DockerEnv (15.29s)
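
Note: docker-env only prints shell statements; the PowerShell pipeline above applies them with Invoke-Expression before calling docker. A minimal sketch of the equivalent from Go, assuming the --shell bash flag and its export KEY="VALUE" output format (both assumptions here, not shown in this log), with the profile name "demo" hypothetical:

	package main

	import (
		"log"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("minikube", "-p", "demo",
			"docker-env", "--shell", "bash").Output()
		if err != nil {
			log.Fatal(err)
		}
		env := os.Environ()
		for _, line := range strings.Split(string(out), "\n") {
			// Assumed shape: export DOCKER_HOST="tcp://..."
			line = strings.TrimSpace(line)
			if strings.HasPrefix(line, "export ") {
				kv := strings.ReplaceAll(strings.TrimPrefix(line, "export "), "\"", "")
				env = append(env, kv)
			}
		}
		docker := exec.Command("docker", "images") // now talks to the cluster's daemon
		docker.Env = env
		docker.Stdout, docker.Stderr = os.Stdout, os.Stderr
		if err := docker.Run(); err != nil {
			log.Fatal(err)
		}
	}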

TestFunctional/parallel/NodeLabels (0.2s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:216: (dbg) Run:  kubectl --context functional-20210816232348-111344 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.20s)

TestFunctional/parallel/LoadImage (11.78s)

=== RUN   TestFunctional/parallel/LoadImage
=== PAUSE TestFunctional/parallel/LoadImage
=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:239: (dbg) Run:  docker pull busybox:1.33
=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:239: (dbg) Done: docker pull busybox:1.33: (3.2531926s)
functional_test.go:246: (dbg) Run:  docker tag busybox:1.33 docker.io/library/busybox:load-functional-20210816232348-111344
=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:252: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210816232348-111344 image load docker.io/library/busybox:load-functional-20210816232348-111344
=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:252: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210816232348-111344 image load docker.io/library/busybox:load-functional-20210816232348-111344: (3.9472984s)
functional_test.go:373: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p functional-20210816232348-111344 -- docker image inspect docker.io/library/busybox:load-functional-20210816232348-111344
=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:373: (dbg) Done: out/minikube-windows-amd64.exe ssh -p functional-20210816232348-111344 -- docker image inspect docker.io/library/busybox:load-functional-20210816232348-111344: (4.0569566s)
--- PASS: TestFunctional/parallel/LoadImage (11.78s)

TestFunctional/parallel/RemoveImage (16.11s)

=== RUN   TestFunctional/parallel/RemoveImage
=== PAUSE TestFunctional/parallel/RemoveImage
=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:331: (dbg) Run:  docker pull busybox:1.32
=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:331: (dbg) Done: docker pull busybox:1.32: (3.5610444s)
functional_test.go:338: (dbg) Run:  docker tag busybox:1.32 docker.io/library/busybox:remove-functional-20210816232348-111344
=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:344: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210816232348-111344 image load docker.io/library/busybox:remove-functional-20210816232348-111344
=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:344: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210816232348-111344 image load docker.io/library/busybox:remove-functional-20210816232348-111344: (5.1986603s)
functional_test.go:350: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210816232348-111344 image rm docker.io/library/busybox:remove-functional-20210816232348-111344
=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:350: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210816232348-111344 image rm docker.io/library/busybox:remove-functional-20210816232348-111344: (3.0697275s)
functional_test.go:387: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p functional-20210816232348-111344 -- docker images
=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:387: (dbg) Done: out/minikube-windows-amd64.exe ssh -p functional-20210816232348-111344 -- docker images: (3.7543607s)
--- PASS: TestFunctional/parallel/RemoveImage (16.11s)

TestFunctional/parallel/LoadImageFromFile (13.75s)

=== RUN   TestFunctional/parallel/LoadImageFromFile
=== PAUSE TestFunctional/parallel/LoadImageFromFile
=== CONT  TestFunctional/parallel/LoadImageFromFile
functional_test.go:279: (dbg) Run:  docker pull busybox:1.31
=== CONT  TestFunctional/parallel/LoadImageFromFile
functional_test.go:279: (dbg) Done: docker pull busybox:1.31: (3.7449912s)
functional_test.go:286: (dbg) Run:  docker tag busybox:1.31 docker.io/library/busybox:load-from-file-functional-20210816232348-111344
functional_test.go:293: (dbg) Run:  docker save -o busybox.tar docker.io/library/busybox:load-from-file-functional-20210816232348-111344
=== CONT  TestFunctional/parallel/LoadImageFromFile
functional_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210816232348-111344 image load C:\jenkins\workspace\Docker_Windows_integration\busybox.tar
=== CONT  TestFunctional/parallel/LoadImageFromFile
functional_test.go:304: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210816232348-111344 image load C:\jenkins\workspace\Docker_Windows_integration\busybox.tar: (4.9479202s)
functional_test.go:387: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p functional-20210816232348-111344 -- docker images
=== CONT  TestFunctional/parallel/LoadImageFromFile
functional_test.go:387: (dbg) Done: out/minikube-windows-amd64.exe ssh -p functional-20210816232348-111344 -- docker images: (3.890986s)
--- PASS: TestFunctional/parallel/LoadImageFromFile (13.75s)
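Note: this variant routes the image through a tarball instead of the host daemon — `docker save -o` writes the archive, and `minikube image load` accepts the file path directly. A short sketch under the same assumed binary path and profile as the earlier one:

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        const profile = "functional-20210816232348-111344"
        tag := "docker.io/library/busybox:load-from-file-" + profile

        // Archive the image on the host...
        if out, err := exec.Command("docker", "save", "-o", "busybox.tar", tag).CombinedOutput(); err != nil {
            log.Fatalf("docker save: %v\n%s", err, out)
        }
        // ...then ship the archive into the node.
        if out, err := exec.Command("out/minikube-windows-amd64.exe",
            "-p", profile, "image", "load", "busybox.tar").CombinedOutput(); err != nil {
            log.Fatalf("image load: %v\n%s", err, out)
        }
    }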

TestFunctional/parallel/BuildImage (14.76s)

=== RUN   TestFunctional/parallel/BuildImage
=== PAUSE TestFunctional/parallel/BuildImage
=== CONT  TestFunctional/parallel/BuildImage
functional_test.go:407: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210816232348-111344 image build -t localhost/my-image:functional-20210816232348-111344 testdata\build
=== CONT  TestFunctional/parallel/BuildImage
functional_test.go:407: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210816232348-111344 image build -t localhost/my-image:functional-20210816232348-111344 testdata\build: (10.8725821s)
functional_test.go:412: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-20210816232348-111344 image build -t localhost/my-image:functional-20210816232348-111344 testdata\build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM busybox
latest: Pulling from library/busybox
b71f96345d44: Pulling fs layer
b71f96345d44: Verifying Checksum
b71f96345d44: Download complete
b71f96345d44: Pull complete
Digest: sha256:0f354ec1728d9ff32edcd7d1b8bbdfc798277ad36120dc3dc683be44524c8b60
Status: Downloaded newer image for busybox:latest
---> 69593048aa3a
Step 2/3 : RUN true
---> Running in 077e7fad32c4
Removing intermediate container 077e7fad32c4
---> ef10a534cc22
Step 3/3 : ADD content.txt /
---> ab0e6b8ebf63
Successfully built ab0e6b8ebf63
Successfully tagged localhost/my-image:functional-20210816232348-111344
functional_test.go:373: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p functional-20210816232348-111344 -- docker image inspect localhost/my-image:functional-20210816232348-111344
=== CONT  TestFunctional/parallel/BuildImage
functional_test.go:373: (dbg) Done: out/minikube-windows-amd64.exe ssh -p functional-20210816232348-111344 -- docker image inspect localhost/my-image:functional-20210816232348-111344: (3.8864916s)
--- PASS: TestFunctional/parallel/BuildImage (14.76s)
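Note: the three build steps in the captured output above determine the Dockerfile in testdata\build; reconstructed from the step log (the file in the repo may differ in comments or whitespace), it is:

    FROM busybox
    RUN true
    ADD content.txt /

The transcript reads exactly like a local `docker build` because `minikube image build` relays the daemon's build output back to the caller, as the stdout capture above shows.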

TestFunctional/parallel/ListImages (3.34s)

=== RUN   TestFunctional/parallel/ListImages
=== PAUSE TestFunctional/parallel/ListImages
=== CONT  TestFunctional/parallel/ListImages
functional_test.go:441: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210816232348-111344 image ls
=== CONT  TestFunctional/parallel/ListImages
functional_test.go:441: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210816232348-111344 image ls: (3.3360194s)
functional_test.go:446: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-20210816232348-111344 image ls:
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.4.1
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.21.3
k8s.gcr.io/kube-proxy:v1.21.3
k8s.gcr.io/kube-controller-manager:v1.21.3
k8s.gcr.io/kube-apiserver:v1.21.3
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns/coredns:v1.8.0
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/minikube-local-cache-test:functional-20210816232348-111344
docker.io/kubernetesui/metrics-scraper:v1.0.4
docker.io/kubernetesui/dashboard:v2.1.0
functional_test.go:449: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-20210816232348-111344 image ls:
! Executing "docker container inspect functional-20210816232348-111344 --format={{.State.Status}}" took an unusually long time: 2.057105s
* Restarting the docker service may improve performance.
--- PASS: TestFunctional/parallel/ListImages (3.34s)

TestFunctional/parallel/NonActiveRuntimeDisabled (4.23s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1774: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210816232348-111344 ssh "sudo systemctl is-active crio"
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1774: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20210816232348-111344 ssh "sudo systemctl is-active crio": exit status 1 (4.2297746s)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (4.23s)
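Note: the `exit status 1` plus `ssh: Process exited with status 3` pair is the expected outcome here: `systemctl is-active` prints `inactive` and exits non-zero (3) when a unit is not running, and that code propagates back through ssh — confirming crio is disabled while docker is the active runtime. A sketch of testing a unit by exit code alone (a hypothetical helper, not the project's code):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // unitActive reports whether a systemd unit is active: `systemctl
    // is-active --quiet` exits 0 for active and non-zero for inactive.
    func unitActive(unit string) bool {
        return exec.Command("systemctl", "is-active", "--quiet", unit).Run() == nil
    }

    func main() {
        fmt.Println("crio active:", unitActive("crio"))
    }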

TestFunctional/parallel/Version/short (0.35s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2003: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210816232348-111344 version --short
--- PASS: TestFunctional/parallel/Version/short (0.35s)

TestFunctional/parallel/Version/components (4.59s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2016: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210816232348-111344 version -o=json --components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2016: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210816232348-111344 version -o=json --components: (4.588312s)
--- PASS: TestFunctional/parallel/Version/components (4.59s)

TestFunctional/parallel/UpdateContextCmd/no_changes (2.58s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:1865: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210816232348-111344 update-context --alsologtostderr -v=2
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:1865: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210816232348-111344 update-context --alsologtostderr -v=2: (2.5781602s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (2.58s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.54s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:1865: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210816232348-111344 update-context --alsologtostderr -v=2
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:1865: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210816232348-111344 update-context --alsologtostderr -v=2: (2.5372574s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.54s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (2.53s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:1865: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210816232348-111344 update-context --alsologtostderr -v=2
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:1865: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210816232348-111344 update-context --alsologtostderr -v=2: (2.5272671s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (2.53s)

TestFunctional/parallel/ProfileCmd/profile_not_create (6.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1202: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1202: (dbg) Done: out/minikube-windows-amd64.exe profile lis: (2.2550101s)
functional_test.go:1206: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1206: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (4.1811534s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (6.44s)

TestFunctional/parallel/ProfileCmd/profile_list (4.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1240: (dbg) Run:  out/minikube-windows-amd64.exe profile list
=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1240: (dbg) Done: out/minikube-windows-amd64.exe profile list: (4.090821s)
functional_test.go:1245: Took "4.0919248s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1254: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1259: Took "339.5345ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (4.43s)

TestFunctional/parallel/ProfileCmd/profile_json_output (4.33s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1290: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1290: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (4.0118403s)
functional_test.go:1295: Took "4.0123276s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1303: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1308: Took "313.1217ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (4.33s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:126: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-20210816232348-111344 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.22s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:164: (dbg) Run:  kubectl --context functional-20210816232348-111344 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.22s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:364: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-20210816232348-111344 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to kill pid 10840: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/delete_busybox_image (1.04s)

=== RUN   TestFunctional/delete_busybox_image
functional_test.go:183: (dbg) Run:  docker rmi -f docker.io/library/busybox:load-functional-20210816232348-111344
functional_test.go:188: (dbg) Run:  docker rmi -f docker.io/library/busybox:remove-functional-20210816232348-111344
--- PASS: TestFunctional/delete_busybox_image (1.04s)

TestFunctional/delete_my-image_image (0.47s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:195: (dbg) Run:  docker rmi -f localhost/my-image:functional-20210816232348-111344
--- PASS: TestFunctional/delete_my-image_image (0.47s)

TestFunctional/delete_minikube_cached_images (0.44s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:203: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20210816232348-111344
--- PASS: TestFunctional/delete_minikube_cached_images (0.44s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (3.96s)

=== RUN   TestErrorJSONOutput
json_output_test.go:146: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-20210816233753-111344 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:146: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-20210816233753-111344 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (302.3562ms)
-- stdout --
	{"data":{"currentstep":"0","message":"[json-output-error-20210816233753-111344] minikube v1.22.0 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763","name":"Initial Minikube Setup","totalsteps":"19"},"datacontenttype":"application/json","id":"40958d03-ed1c-49d8-b7ab-7744718e3295","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"message":"KUBECONFIG=C:\\Users\\jenkins\\minikube-integration\\kubeconfig"},"datacontenttype":"application/json","id":"02cf4aff-23df-4dca-baa1-f3d3db7dae97","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins\\minikube-integration\\.minikube"},"datacontenttype":"application/json","id":"9e9f37c8-13ec-4e77-8b81-f04d94a89835","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_LOCATION=12230"},"datacontenttype":"application/json","id":"4f24353b-8ca0-4aac-a053-81e27c1f095d","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""},"datacontenttype":"application/json","id":"f710f6dd-9a18-4722-90f7-f5dd3394b14c","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.error"}
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-20210816233753-111344" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-20210816233753-111344
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-20210816233753-111344: (3.6548508s)
--- PASS: TestErrorJSONOutput (3.96s)
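Note: each line of the `--output=json` stream above is a CloudEvents-style envelope, and the final `io.k8s.sigs.minikube.error` event carries the same exit code (56) the process returned for the unsupported `fail` driver. A minimal decoder for one such line (abridged from the output above; the struct mirrors the fields visible in the log, not minikube's internal types):

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
    )

    // event mirrors the envelope fields visible in the log output.
    type event struct {
        Data map[string]string `json:"data"`
        ID   string            `json:"id"`
        Type string            `json:"type"`
    }

    func main() {
        line := `{"data":{"exitcode":"56","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS"},"id":"f710f6dd-9a18-4722-90f7-f5dd3394b14c","type":"io.k8s.sigs.minikube.error"}`
        var e event
        if err := json.Unmarshal([]byte(line), &e); err != nil {
            log.Fatal(err)
        }
        fmt.Printf("type=%s exitcode=%s message=%q\n", e.Type, e.Data["exitcode"], e.Data["message"])
    }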

TestKicCustomNetwork/create_custom_network (110.32s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-20210816233757-111344 --network=
E0816 23:38:24.037074  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344\client.crt: The system cannot find the path specified.
E0816 23:39:09.035049  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210816232348-111344\client.crt: The system cannot find the path specified.
E0816 23:39:09.041421  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210816232348-111344\client.crt: The system cannot find the path specified.
E0816 23:39:09.051469  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210816232348-111344\client.crt: The system cannot find the path specified.
E0816 23:39:09.071884  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210816232348-111344\client.crt: The system cannot find the path specified.
E0816 23:39:09.113070  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210816232348-111344\client.crt: The system cannot find the path specified.
E0816 23:39:09.194248  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210816232348-111344\client.crt: The system cannot find the path specified.
E0816 23:39:09.354817  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210816232348-111344\client.crt: The system cannot find the path specified.
E0816 23:39:09.675981  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210816232348-111344\client.crt: The system cannot find the path specified.
E0816 23:39:10.316780  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210816232348-111344\client.crt: The system cannot find the path specified.
E0816 23:39:11.598149  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210816232348-111344\client.crt: The system cannot find the path specified.
E0816 23:39:14.159992  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210816232348-111344\client.crt: The system cannot find the path specified.
E0816 23:39:19.281423  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210816232348-111344\client.crt: The system cannot find the path specified.
E0816 23:39:29.523454  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210816232348-111344\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-20210816233757-111344 --network=: (1m39.8796085s)
kic_custom_network_test.go:101: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-20210816233757-111344" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-20210816233757-111344
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-20210816233757-111344: (9.9950625s)
--- PASS: TestKicCustomNetwork/create_custom_network (110.32s)
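Note: the `docker network ls --format {{.Name}}` step is how the test observes the network side effect of starting with `--network=`; the assertion on which name must appear lives in kic_custom_network_test.go, outside this excerpt. A sketch of that existence check:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // networkExists reports whether `docker network ls` lists the name.
    func networkExists(name string) (bool, error) {
        out, err := exec.Command("docker", "network", "ls", "--format", "{{.Name}}").Output()
        if err != nil {
            return false, err
        }
        for _, n := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            if n == name {
                return true, nil
            }
        }
        return false, nil
    }

    func main() {
        ok, err := networkExists("docker-network-20210816233757-111344")
        fmt.Println(ok, err)
    }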

TestKicCustomNetwork/use_default_bridge_network (108.28s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-20210816233948-111344 --network=bridge
E0816 23:39:50.005200  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210816232348-111344\client.crt: The system cannot find the path specified.
E0816 23:40:30.969376  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210816232348-111344\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-20210816233948-111344 --network=bridge: (1m38.5177158s)
kic_custom_network_test.go:101: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-20210816233948-111344" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-20210816233948-111344
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-20210816233948-111344: (9.3308607s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (108.28s)

TestKicExistingNetwork (107.59s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:101: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-windows-amd64.exe start -p existing-network-20210816234138-111344 --network=existing-network
E0816 23:41:52.895438  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210816232348-111344\client.crt: The system cannot find the path specified.
E0816 23:42:00.976343  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:93: (dbg) Done: out/minikube-windows-amd64.exe start -p existing-network-20210816234138-111344 --network=existing-network: (1m34.9488942s)
helpers_test.go:176: Cleaning up "existing-network-20210816234138-111344" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p existing-network-20210816234138-111344
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p existing-network-20210816234138-111344: (10.033363s)
--- PASS: TestKicExistingNetwork (107.59s)

TestMainNoArgs (0.31s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.31s)

TestMultiNode/serial/FreshStart2Nodes (224.51s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20210816234324-111344 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker
E0816 23:44:09.047255  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210816232348-111344\client.crt: The system cannot find the path specified.
E0816 23:44:36.743873  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210816232348-111344\client.crt: The system cannot find the path specified.
E0816 23:47:00.988631  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344\client.crt: The system cannot find the path specified.
multinode_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-20210816234324-111344 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker: (3m39.083339s)
multinode_test.go:87: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210816234324-111344 status --alsologtostderr
multinode_test.go:87: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20210816234324-111344 status --alsologtostderr: (5.4255803s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (224.51s)

TestMultiNode/serial/DeployApp2Nodes (25.42s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:462: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20210816234324-111344 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:462: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20210816234324-111344 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: (3.0399112s)
multinode_test.go:467: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20210816234324-111344 -- rollout status deployment/busybox
multinode_test.go:467: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20210816234324-111344 -- rollout status deployment/busybox: (4.374025s)
multinode_test.go:473: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20210816234324-111344 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:473: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20210816234324-111344 -- get pods -o jsonpath='{.items[*].status.podIP}': (1.7892381s)
multinode_test.go:485: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20210816234324-111344 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:485: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20210816234324-111344 -- get pods -o jsonpath='{.items[*].metadata.name}': (1.8255699s)
multinode_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20210816234324-111344 -- exec busybox-84b6686758-8c8vg -- nslookup kubernetes.io
multinode_test.go:493: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20210816234324-111344 -- exec busybox-84b6686758-8c8vg -- nslookup kubernetes.io: (3.3269217s)
multinode_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20210816234324-111344 -- exec busybox-84b6686758-zx8lt -- nslookup kubernetes.io
multinode_test.go:493: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20210816234324-111344 -- exec busybox-84b6686758-zx8lt -- nslookup kubernetes.io: (2.9944402s)
multinode_test.go:503: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20210816234324-111344 -- exec busybox-84b6686758-8c8vg -- nslookup kubernetes.default
multinode_test.go:503: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20210816234324-111344 -- exec busybox-84b6686758-8c8vg -- nslookup kubernetes.default: (2.0132262s)
multinode_test.go:503: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20210816234324-111344 -- exec busybox-84b6686758-zx8lt -- nslookup kubernetes.default
multinode_test.go:503: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20210816234324-111344 -- exec busybox-84b6686758-zx8lt -- nslookup kubernetes.default: (2.0114745s)
multinode_test.go:511: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20210816234324-111344 -- exec busybox-84b6686758-8c8vg -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:511: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20210816234324-111344 -- exec busybox-84b6686758-8c8vg -- nslookup kubernetes.default.svc.cluster.local: (2.0523002s)
multinode_test.go:511: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20210816234324-111344 -- exec busybox-84b6686758-zx8lt -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:511: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20210816234324-111344 -- exec busybox-84b6686758-zx8lt -- nslookup kubernetes.default.svc.cluster.local: (1.9896792s)
--- PASS: TestMultiNode/serial/DeployApp2Nodes (25.42s)
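Note: the jsonpath queries pull the busybox pod IPs and names so the later steps can exec into each replica by name; with `--nodes=2` the deployment should end up with replicas holding distinct pod IPs. A sketch of consuming the same jsonpath output (plain kubectl against the current context rather than `minikube kubectl -p`):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Space-separated pod IPs, as in the jsonpath query above.
        out, err := exec.Command("kubectl", "get", "pods",
            "-o", "jsonpath={.items[*].status.podIP}").Output()
        if err != nil {
            fmt.Println("kubectl:", err)
            return
        }
        ips := strings.Fields(string(out))
        distinct := map[string]bool{}
        for _, ip := range ips {
            distinct[ip] = true
        }
        fmt.Printf("%d pods, %d distinct IPs\n", len(ips), len(distinct))
    }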

TestMultiNode/serial/PingHostFrom2Pods (9.75s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:521: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20210816234324-111344 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:521: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20210816234324-111344 -- get pods -o jsonpath='{.items[*].metadata.name}': (1.7854569s)
multinode_test.go:529: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20210816234324-111344 -- exec busybox-84b6686758-8c8vg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:529: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20210816234324-111344 -- exec busybox-84b6686758-8c8vg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3": (1.9583244s)
multinode_test.go:537: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20210816234324-111344 -- exec busybox-84b6686758-8c8vg -- sh -c "ping -c 1 192.168.65.2"
multinode_test.go:537: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20210816234324-111344 -- exec busybox-84b6686758-8c8vg -- sh -c "ping -c 1 192.168.65.2": (1.9739922s)
multinode_test.go:529: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20210816234324-111344 -- exec busybox-84b6686758-zx8lt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:529: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20210816234324-111344 -- exec busybox-84b6686758-zx8lt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3": (2.0233728s)
multinode_test.go:537: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20210816234324-111344 -- exec busybox-84b6686758-zx8lt -- sh -c "ping -c 1 192.168.65.2"
multinode_test.go:537: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20210816234324-111344 -- exec busybox-84b6686758-zx8lt -- sh -c "ping -c 1 192.168.65.2": (2.0038046s)
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (9.75s)
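Note: the pipeline run inside each pod, `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`, picks line 5 of busybox nslookup's output (the answer line for the queried name) and takes its third space-separated field — the resolved address — which the follow-up `ping -c 1 192.168.65.2` then confirms is reachable. The same extraction in Go, assuming the busybox output layout the pipeline relies on:

    package main

    import (
        "fmt"
        "strings"
    )

    // fifthLineThirdField mimics `awk 'NR==5' | cut -d' ' -f3`.
    func fifthLineThirdField(out string) string {
        lines := strings.Split(out, "\n")
        if len(lines) < 5 {
            return ""
        }
        fields := strings.Split(lines[4], " ")
        if len(fields) < 3 {
            return ""
        }
        return fields[2]
    }

    func main() {
        // Assumed busybox nslookup layout; real output may differ.
        sample := "Server:    10.96.0.10\n" +
            "Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
            "\n" +
            "Name:      host.minikube.internal\n" +
            "Address 1: 192.168.65.2\n"
        fmt.Println(fifthLineThirdField(sample)) // 192.168.65.2
    }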

TestMultiNode/serial/AddNode (82.52s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:106: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-20210816234324-111344 -v 3 --alsologtostderr
multinode_test.go:106: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-20210816234324-111344 -v 3 --alsologtostderr: (1m15.6668104s)
multinode_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210816234324-111344 status --alsologtostderr
multinode_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20210816234324-111344 status --alsologtostderr: (6.8560614s)
--- PASS: TestMultiNode/serial/AddNode (82.52s)

TestMultiNode/serial/ProfileList (3.65s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
E0816 23:49:09.059421  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210816232348-111344\client.crt: The system cannot find the path specified.
multinode_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (3.6460021s)
--- PASS: TestMultiNode/serial/ProfileList (3.65s)

TestMultiNode/serial/CopyFile (27.2s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:169: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210816234324-111344 status --output json --alsologtostderr
multinode_test.go:169: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20210816234324-111344 status --output json --alsologtostderr: (6.6685329s)
helpers_test.go:535: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210816234324-111344 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:535: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20210816234324-111344 cp testdata\cp-test.txt /home/docker/cp-test.txt: (3.0415933s)
helpers_test.go:549: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210816234324-111344 ssh "sudo cat /home/docker/cp-test.txt"
helpers_test.go:549: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20210816234324-111344 ssh "sudo cat /home/docker/cp-test.txt": (3.4917493s)
helpers_test.go:535: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210816234324-111344 cp testdata\cp-test.txt multinode-20210816234324-111344-m02:/home/docker/cp-test.txt
helpers_test.go:535: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20210816234324-111344 cp testdata\cp-test.txt multinode-20210816234324-111344-m02:/home/docker/cp-test.txt: (3.5198912s)
helpers_test.go:549: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210816234324-111344 ssh -n multinode-20210816234324-111344-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:549: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20210816234324-111344 ssh -n multinode-20210816234324-111344-m02 "sudo cat /home/docker/cp-test.txt": (3.4656799s)
helpers_test.go:535: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210816234324-111344 cp testdata\cp-test.txt multinode-20210816234324-111344-m03:/home/docker/cp-test.txt
helpers_test.go:535: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20210816234324-111344 cp testdata\cp-test.txt multinode-20210816234324-111344-m03:/home/docker/cp-test.txt: (3.5172495s)
helpers_test.go:549: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210816234324-111344 ssh -n multinode-20210816234324-111344-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:549: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20210816234324-111344 ssh -n multinode-20210816234324-111344-m03 "sudo cat /home/docker/cp-test.txt": (3.4951534s)
--- PASS: TestMultiNode/serial/CopyFile (27.20s)

TestMultiNode/serial/StopNode (15.76s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210816234324-111344 node stop m03
multinode_test.go:191: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20210816234324-111344 node stop m03: (4.6166618s)
multinode_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210816234324-111344 status
multinode_test.go:197: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20210816234324-111344 status: exit status 7 (5.5824423s)
-- stdout --
	multinode-20210816234324-111344
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20210816234324-111344-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20210816234324-111344-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:204: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210816234324-111344 status --alsologtostderr
multinode_test.go:204: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20210816234324-111344 status --alsologtostderr: exit status 7 (5.5574312s)
-- stdout --
	multinode-20210816234324-111344
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20210816234324-111344-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20210816234324-111344-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0816 23:49:47.899853   16936 out.go:298] Setting OutFile to fd 2508 ...
	I0816 23:49:47.909573   16936 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 23:49:47.909573   16936 out.go:311] Setting ErrFile to fd 2216...
	I0816 23:49:47.909573   16936 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 23:49:47.925281   16936 out.go:305] Setting JSON to false
	I0816 23:49:47.925281   16936 mustload.go:65] Loading cluster: multinode-20210816234324-111344
	I0816 23:49:47.926122   16936 config.go:177] Loaded profile config "multinode-20210816234324-111344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.21.3
	I0816 23:49:47.926122   16936 status.go:253] checking status of multinode-20210816234324-111344 ...
	I0816 23:49:47.937091   16936 cli_runner.go:115] Run: docker container inspect multinode-20210816234324-111344 --format={{.State.Status}}
	I0816 23:49:49.652033   16936 cli_runner.go:168] Completed: docker container inspect multinode-20210816234324-111344 --format={{.State.Status}}: (1.7148755s)
	I0816 23:49:49.652336   16936 status.go:328] multinode-20210816234324-111344 host status = "Running" (err=<nil>)
	I0816 23:49:49.652336   16936 host.go:66] Checking if "multinode-20210816234324-111344" exists ...
	I0816 23:49:49.661180   16936 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210816234324-111344
	I0816 23:49:50.074925   16936 host.go:66] Checking if "multinode-20210816234324-111344" exists ...
	I0816 23:49:50.083730   16936 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 23:49:50.089737   16936 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210816234324-111344
	I0816 23:49:50.503719   16936 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55039 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\multinode-20210816234324-111344\id_rsa Username:docker}
	I0816 23:49:50.647020   16936 ssh_runner.go:149] Run: systemctl --version
	I0816 23:49:50.669294   16936 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 23:49:50.702804   16936 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20210816234324-111344
	I0816 23:49:51.121420   16936 kubeconfig.go:93] found "multinode-20210816234324-111344" server: "https://127.0.0.1:55036"
	I0816 23:49:51.121635   16936 api_server.go:164] Checking apiserver status ...
	I0816 23:49:51.129520   16936 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 23:49:51.180311   16936 ssh_runner.go:149] Run: sudo egrep ^[0-9]+:freezer: /proc/2230/cgroup
	I0816 23:49:51.204750   16936 api_server.go:180] apiserver freezer: "7:freezer:/docker/83ea2519c8191992fc3066cee96feacf60e340d89008c6eee75a551013831276/kubepods/burstable/pod5d8457d6acc8d7264171020456771f34/479a7392778e81f4016e34511a29f4e903acc7540ce92e7c6b9512fd4906a95e"
	I0816 23:49:51.214895   16936 ssh_runner.go:149] Run: sudo cat /sys/fs/cgroup/freezer/docker/83ea2519c8191992fc3066cee96feacf60e340d89008c6eee75a551013831276/kubepods/burstable/pod5d8457d6acc8d7264171020456771f34/479a7392778e81f4016e34511a29f4e903acc7540ce92e7c6b9512fd4906a95e/freezer.state
	I0816 23:49:51.239412   16936 api_server.go:202] freezer state: "THAWED"
	I0816 23:49:51.239412   16936 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55036/healthz ...
	I0816 23:49:51.259691   16936 api_server.go:265] https://127.0.0.1:55036/healthz returned 200:
	ok
	I0816 23:49:51.259691   16936 status.go:419] multinode-20210816234324-111344 apiserver status = Running (err=<nil>)
	I0816 23:49:51.259691   16936 status.go:255] multinode-20210816234324-111344 status: &{Name:multinode-20210816234324-111344 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 23:49:51.260379   16936 status.go:253] checking status of multinode-20210816234324-111344-m02 ...
	I0816 23:49:51.272789   16936 cli_runner.go:115] Run: docker container inspect multinode-20210816234324-111344-m02 --format={{.State.Status}}
	I0816 23:49:51.697191   16936 status.go:328] multinode-20210816234324-111344-m02 host status = "Running" (err=<nil>)
	I0816 23:49:51.697191   16936 host.go:66] Checking if "multinode-20210816234324-111344-m02" exists ...
	I0816 23:49:51.707867   16936 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210816234324-111344-m02
	I0816 23:49:52.147636   16936 host.go:66] Checking if "multinode-20210816234324-111344-m02" exists ...
	I0816 23:49:52.156556   16936 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 23:49:52.162237   16936 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210816234324-111344-m02
	I0816 23:49:52.583022   16936 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55044 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\multinode-20210816234324-111344-m02\id_rsa Username:docker}
	I0816 23:49:52.722269   16936 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 23:49:52.754612   16936 status.go:255] multinode-20210816234324-111344-m02 status: &{Name:multinode-20210816234324-111344-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0816 23:49:52.754612   16936 status.go:253] checking status of multinode-20210816234324-111344-m03 ...
	I0816 23:49:52.769495   16936 cli_runner.go:115] Run: docker container inspect multinode-20210816234324-111344-m03 --format={{.State.Status}}
	I0816 23:49:53.188004   16936 status.go:328] multinode-20210816234324-111344-m03 host status = "Stopped" (err=<nil>)
	I0816 23:49:53.188004   16936 status.go:341] host is not running, skipping remaining checks
	I0816 23:49:53.188004   16936 status.go:255] multinode-20210816234324-111344-m03 status: &{Name:multinode-20210816234324-111344-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (15.76s)
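The stderr above traces how the status check verifies the control plane: resolve the host port Docker forwards to 8443, confirm the kube-apiserver process is not frozen in its cgroup, then probe /healthz and expect the literal body "ok". Below is a minimal Go sketch of just that final healthz probe, assuming the run-specific port 55036 and a plain skip-verify TLS client rather than minikube's actual client setup:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Port 55036 is the host port Docker forwarded to 8443 in this run;
		// a local rerun will get a different port.
		url := "https://127.0.0.1:55036/healthz"
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The apiserver cert is not in the host trust store, so this
			// local-only sketch skips verification (an assumption, not
			// minikube's real TLS handling).
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("healthz unreachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// A healthy apiserver answers 200 with the body "ok", as logged above.
		fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
	}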

TestMultiNode/serial/StartAfterStop (39.79s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:225: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:235: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210816234324-111344 node start m03 --alsologtostderr
multinode_test.go:235: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20210816234324-111344 node start m03 --alsologtostderr: (32.1739977s)
multinode_test.go:242: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210816234324-111344 status
multinode_test.go:242: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20210816234324-111344 status: (7.0263495s)
multinode_test.go:256: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (39.79s)

TestMultiNode/serial/RestartKeepsNodes (283.69s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:264: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-20210816234324-111344
multinode_test.go:271: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-20210816234324-111344
multinode_test.go:271: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-20210816234324-111344: (30.9630915s)
multinode_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20210816234324-111344 --wait=true -v=8 --alsologtostderr
E0816 23:52:00.999457  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344\client.crt: The system cannot find the path specified.
E0816 23:54:09.071121  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210816232348-111344\client.crt: The system cannot find the path specified.
E0816 23:55:04.078188  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344\client.crt: The system cannot find the path specified.
multinode_test.go:276: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-20210816234324-111344 --wait=true -v=8 --alsologtostderr: (4m12.0900288s)
multinode_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-20210816234324-111344
--- PASS: TestMultiNode/serial/RestartKeepsNodes (283.69s)

TestMultiNode/serial/DeleteNode (23.5s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:375: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210816234324-111344 node delete m03
E0816 23:55:32.130108  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210816232348-111344\client.crt: The system cannot find the path specified.
multinode_test.go:375: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20210816234324-111344 node delete m03: (17.3580021s)
multinode_test.go:381: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210816234324-111344 status --alsologtostderr
multinode_test.go:381: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20210816234324-111344 status --alsologtostderr: (5.1929473s)
multinode_test.go:395: (dbg) Run:  docker volume ls
multinode_test.go:405: (dbg) Run:  kubectl get nodes
multinode_test.go:413: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (23.50s)
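The readiness assertion at multinode_test.go:413 hands kubectl a go-template that walks every node's status conditions and prints the status of the "Ready" condition. For readers unfamiliar with that syntax, here is a self-contained sketch of how the template evaluates, using Go's text/template over a hand-built stand-in for a NodeList (the data literal is illustrative, not captured from this run):

	package main

	import (
		"os"
		"text/template"
	)

	func main() {
		// The exact template the test passes to kubectl: for each item,
		// find the condition of type "Ready" and print its status.
		const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

		// Hypothetical stand-in for `kubectl get nodes -o json` output.
		nodes := map[string]interface{}{
			"items": []interface{}{
				map[string]interface{}{
					"status": map[string]interface{}{
						"conditions": []interface{}{
							map[string]interface{}{"type": "MemoryPressure", "status": "False"},
							map[string]interface{}{"type": "Ready", "status": "True"},
						},
					},
				},
			},
		}
		// Prints " True" -- one line per node with a Ready condition.
		template.Must(template.New("ready").Parse(tmpl)).Execute(os.Stdout, nodes)
	}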

TestMultiNode/serial/StopMultiNode (32.97s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:295: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210816234324-111344 stop
multinode_test.go:295: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20210816234324-111344 stop: (27.9555118s)
multinode_test.go:301: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210816234324-111344 status
multinode_test.go:301: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20210816234324-111344 status: exit status 7 (2.5757185s)

-- stdout --
	multinode-20210816234324-111344
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20210816234324-111344-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:308: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210816234324-111344 status --alsologtostderr
multinode_test.go:308: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20210816234324-111344 status --alsologtostderr: exit status 7 (2.4358995s)

-- stdout --
	multinode-20210816234324-111344
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20210816234324-111344-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0816 23:56:10.977401   50768 out.go:298] Setting OutFile to fd 2268 ...
	I0816 23:56:10.979355   50768 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 23:56:10.979355   50768 out.go:311] Setting ErrFile to fd 2352...
	I0816 23:56:10.979355   50768 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 23:56:10.994702   50768 out.go:305] Setting JSON to false
	I0816 23:56:10.994935   50768 mustload.go:65] Loading cluster: multinode-20210816234324-111344
	I0816 23:56:10.995637   50768 config.go:177] Loaded profile config "multinode-20210816234324-111344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.21.3
	I0816 23:56:10.995849   50768 status.go:253] checking status of multinode-20210816234324-111344 ...
	I0816 23:56:11.009263   50768 cli_runner.go:115] Run: docker container inspect multinode-20210816234324-111344 --format={{.State.Status}}
	I0816 23:56:12.720138   50768 cli_runner.go:168] Completed: docker container inspect multinode-20210816234324-111344 --format={{.State.Status}}: (1.7105654s)
	I0816 23:56:12.720138   50768 status.go:328] multinode-20210816234324-111344 host status = "Stopped" (err=<nil>)
	I0816 23:56:12.720138   50768 status.go:341] host is not running, skipping remaining checks
	I0816 23:56:12.720381   50768 status.go:255] multinode-20210816234324-111344 status: &{Name:multinode-20210816234324-111344 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 23:56:12.720381   50768 status.go:253] checking status of multinode-20210816234324-111344-m02 ...
	I0816 23:56:12.734154   50768 cli_runner.go:115] Run: docker container inspect multinode-20210816234324-111344-m02 --format={{.State.Status}}
	I0816 23:56:13.150341   50768 status.go:328] multinode-20210816234324-111344-m02 host status = "Stopped" (err=<nil>)
	I0816 23:56:13.150341   50768 status.go:341] host is not running, skipping remaining checks
	I0816 23:56:13.150341   50768 status.go:255] multinode-20210816234324-111344-m02 status: &{Name:multinode-20210816234324-111344-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (32.97s)
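As the stderr shows, each node's status check starts by shelling out to docker container inspect --format={{.State.Status}}; once the state comes back as anything other than running, the host is reported Stopped and the kubelet/apiserver probes are skipped (status.go:341). A rough Go equivalent of that single probe, assuming this run's container name and simplifying the state mapping the log displays:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same command the log records: ask Docker for the container state only.
		out, err := exec.Command("docker", "container", "inspect",
			"multinode-20210816234324-111344", "--format", "{{.State.Status}}").Output()
		if err != nil {
			fmt.Println("inspect failed (container missing or daemon down):", err)
			return
		}
		state := strings.TrimSpace(string(out))
		if state != "running" {
			// Mirrors "host is not running, skipping remaining checks" above.
			fmt.Printf("host status = %q, skipping remaining checks\n", state)
			return
		}
		fmt.Println("host status = Running, continuing to kubelet/apiserver checks")
	}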

TestMultiNode/serial/RestartMultiNode (168.51s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:325: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:335: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20210816234324-111344 --wait=true -v=8 --alsologtostderr --driver=docker
E0816 23:57:01.011662  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344\client.crt: The system cannot find the path specified.
multinode_test.go:335: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-20210816234324-111344 --wait=true -v=8 --alsologtostderr --driver=docker: (2m42.1273336s)
multinode_test.go:341: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210816234324-111344 status --alsologtostderr
multinode_test.go:341: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20210816234324-111344 status --alsologtostderr: (5.4451824s)
multinode_test.go:355: (dbg) Run:  kubectl get nodes
multinode_test.go:363: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (168.51s)

TestMultiNode/serial/ValidateNameConflict (125.38s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:424: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-20210816234324-111344
multinode_test.go:433: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20210816234324-111344-m02 --driver=docker
multinode_test.go:433: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-20210816234324-111344-m02 --driver=docker: exit status 14 (330.4589ms)

-- stdout --
	* [multinode-20210816234324-111344-m02] minikube v1.22.0 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	  - KUBECONFIG=C:\Users\jenkins\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12230
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-20210816234324-111344-m02' is duplicated with machine name 'multinode-20210816234324-111344-m02' in profile 'multinode-20210816234324-111344'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:441: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20210816234324-111344-m03 --driver=docker
E0816 23:59:09.081397  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210816232348-111344\client.crt: The system cannot find the path specified.
multinode_test.go:441: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-20210816234324-111344-m03 --driver=docker: (1m46.7195024s)
multinode_test.go:448: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-20210816234324-111344
multinode_test.go:448: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node add -p multinode-20210816234324-111344: exit status 80 (4.894042s)

-- stdout --
	* Adding node m03 to cluster multinode-20210816234324-111344
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20210816234324-111344-m03 already exists in multinode-20210816234324-111344-m03 profile
	* 
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                              │
	│    * If the above advice does not help, please let us know:                                                  │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                │
	│                                                                                                              │
	│    * Please attach the following file to the GitHub issue:                                                   │
	│    * - C:\Users\jenkins\AppData\Local\Temp\minikube_node_68dc163ecc1470275f97c1774d2d827d0925d552_116.log    │
	│                                                                                                              │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:453: (dbg) Run:  out/minikube-windows-amd64.exe delete -p multinode-20210816234324-111344-m03
multinode_test.go:453: (dbg) Done: out/minikube-windows-amd64.exe delete -p multinode-20210816234324-111344-m03: (13.128125s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (125.38s)
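This test pins two distinct exit codes: 14 (MK_USAGE) when a new profile name collides with a machine name inside an existing profile, and 80 (GUEST_NODE_ADD) when node add targets a node that already exists. A script driving minikube can branch on those codes; a sketch, under the assumption that the binary path and profile name from this run are reused:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// The duplicated-name invocation logged above.
		cmd := exec.Command("out/minikube-windows-amd64.exe", "start",
			"-p", "multinode-20210816234324-111344-m02", "--driver=docker")
		if err := cmd.Run(); err != nil {
			if ee, ok := err.(*exec.ExitError); ok {
				switch ee.ExitCode() {
				case 14:
					// MK_USAGE, per this run: profile name must be unique.
					fmt.Println("usage error: profile name conflict")
				case 80:
					// GUEST_NODE_ADD, per this run: node already exists.
					fmt.Println("guest error: node already exists")
				default:
					fmt.Println("exit code:", ee.ExitCode())
				}
			}
		}
	}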

TestDebPackageInstall/install_amd64_debian:sid/minikube (0s)
=== RUN   TestDebPackageInstall/install_amd64_debian:sid/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:sid/minikube (0.00s)

TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver (0s)
=== RUN   TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver
--- PASS: TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver (0.00s)

TestDebPackageInstall/install_amd64_debian:latest/minikube (0s)
=== RUN   TestDebPackageInstall/install_amd64_debian:latest/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:latest/minikube (0.00s)

TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver (0s)
=== RUN   TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver
--- PASS: TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver (0.00s)

TestDebPackageInstall/install_amd64_debian:10/minikube (0s)
=== RUN   TestDebPackageInstall/install_amd64_debian:10/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:10/minikube (0.00s)

TestDebPackageInstall/install_amd64_debian:10/kvm2-driver (0s)
=== RUN   TestDebPackageInstall/install_amd64_debian:10/kvm2-driver
--- PASS: TestDebPackageInstall/install_amd64_debian:10/kvm2-driver (0.00s)

TestDebPackageInstall/install_amd64_debian:9/minikube (0s)
=== RUN   TestDebPackageInstall/install_amd64_debian:9/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:9/minikube (0.00s)

TestDebPackageInstall/install_amd64_debian:9/kvm2-driver (0s)
=== RUN   TestDebPackageInstall/install_amd64_debian:9/kvm2-driver
--- PASS: TestDebPackageInstall/install_amd64_debian:9/kvm2-driver (0.00s)

TestDebPackageInstall/install_amd64_ubuntu:latest/minikube (0s)
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:latest/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:latest/minikube (0.00s)

TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver (0s)
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver (0.00s)

TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube (0s)
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube (0.00s)

TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver (0s)
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver (0.00s)

TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube (0s)
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube (0.00s)

TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver (0s)
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver (0.00s)

TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube (0s)
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube (0.00s)

TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver (0s)
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver (0.00s)

TestPreload (240.84s)
=== RUN   TestPreload
preload_test.go:48: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-20210817000127-111344 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.17.0
E0817 00:02:01.023780  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344\client.crt: The system cannot find the path specified.
preload_test.go:48: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-20210817000127-111344 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.17.0: (2m14.568225s)
preload_test.go:61: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p test-preload-20210817000127-111344 -- docker pull busybox
preload_test.go:61: (dbg) Done: out/minikube-windows-amd64.exe ssh -p test-preload-20210817000127-111344 -- docker pull busybox: (6.2055702s)
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-20210817000127-111344 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker --kubernetes-version=v1.17.3
E0817 00:04:09.093875  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210816232348-111344\client.crt: The system cannot find the path specified.
preload_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-20210817000127-111344 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker --kubernetes-version=v1.17.3: (1m24.4114937s)
preload_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p test-preload-20210817000127-111344 -- docker images
preload_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe ssh -p test-preload-20210817000127-111344 -- docker images: (3.7459685s)
helpers_test.go:176: Cleaning up "test-preload-20210817000127-111344" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-20210817000127-111344
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-20210817000127-111344: (11.906576s)
--- PASS: TestPreload (240.84s)

TestScheduledStopWindows (140.6s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-20210817000528-111344 --memory=2048 --driver=docker
E0817 00:07:01.034795  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-20210817000528-111344 --memory=2048 --driver=docker: (1m37.5414093s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-20210817000528-111344 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-20210817000528-111344 --schedule 5m: (3.3549409s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-20210817000528-111344 -n scheduled-stop-20210817000528-111344
scheduled_stop_test.go:191: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-20210817000528-111344 -n scheduled-stop-20210817000528-111344: (3.7725861s)
scheduled_stop_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-20210817000528-111344 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-20210817000528-111344 -- sudo systemctl show minikube-scheduled-stop --no-page: (3.6842236s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-20210817000528-111344 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-20210817000528-111344 --schedule 5s: (3.559058s)
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-20210817000528-111344
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-20210817000528-111344: exit status 7 (2.0524304s)

-- stdout --
	scheduled-stop-20210817000528-111344
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-20210817000528-111344 -n scheduled-stop-20210817000528-111344
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-20210817000528-111344 -n scheduled-stop-20210817000528-111344: exit status 7 (2.0515304s)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-20210817000528-111344" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-20210817000528-111344
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-20210817000528-111344: (9.5706735s)
--- PASS: TestScheduledStopWindows (140.60s)

TestSkaffold (174.95s)
=== RUN   TestSkaffold
skaffold_test.go:57: (dbg) Run:  C:\Users\jenkins\AppData\Local\Temp\skaffold.exe360286799 version
skaffold_test.go:61: skaffold version: v1.30.0
skaffold_test.go:64: (dbg) Run:  out/minikube-windows-amd64.exe start -p skaffold-20210817000749-111344 --memory=2600 --driver=docker
E0817 00:09:09.104880  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210816232348-111344\client.crt: The system cannot find the path specified.
skaffold_test.go:64: (dbg) Done: out/minikube-windows-amd64.exe start -p skaffold-20210817000749-111344 --memory=2600 --driver=docker: (1m39.7148081s)
skaffold_test.go:84: copying out/minikube-windows-amd64.exe to C:\jenkins\workspace\Docker_Windows_integration\out\minikube.exe
skaffold_test.go:108: (dbg) Run:  C:\Users\jenkins\AppData\Local\Temp\skaffold.exe360286799 run --minikube-profile skaffold-20210817000749-111344 --kube-context skaffold-20210817000749-111344 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:108: (dbg) Done: C:\Users\jenkins\AppData\Local\Temp\skaffold.exe360286799 run --minikube-profile skaffold-20210817000749-111344 --kube-context skaffold-20210817000749-111344 --status-check=true --port-forward=false --interactive=false: (49.6128438s)
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:343: "leeroy-app-6c47c556-7s45h" [c98e446b-cb34-4397-8538-4d4e66fe22f9] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-app healthy within 5.0413508s
skaffold_test.go:117: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:343: "leeroy-web-5cbbf5dc58-jhjgq" [4a7b18f0-f7d9-4982-bda8-6575d95ca6fd] Running
skaffold_test.go:117: (dbg) TestSkaffold: app=leeroy-web healthy within 5.0224587s
helpers_test.go:176: Cleaning up "skaffold-20210817000749-111344" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p skaffold-20210817000749-111344
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p skaffold-20210817000749-111344: (13.7681691s)
--- PASS: TestSkaffold (174.95s)

TestRunningBinaryUpgrade (355.86s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Run:  C:\Users\jenkins\AppData\Local\Temp\minikube-v1.9.0.188938468.exe start -p running-upgrade-20210817001515-111344 --memory=2200 --vm-driver=docker
E0817 00:15:20.320068  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210817000749-111344\client.crt: The system cannot find the path specified.
E0817 00:15:20.325777  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210817000749-111344\client.crt: The system cannot find the path specified.
E0817 00:15:20.336954  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210817000749-111344\client.crt: The system cannot find the path specified.
E0817 00:15:20.361622  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210817000749-111344\client.crt: The system cannot find the path specified.
E0817 00:15:20.402231  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210817000749-111344\client.crt: The system cannot find the path specified.
E0817 00:15:20.482654  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210817000749-111344\client.crt: The system cannot find the path specified.
E0817 00:15:20.643642  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210817000749-111344\client.crt: The system cannot find the path specified.
E0817 00:15:20.965129  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210817000749-111344\client.crt: The system cannot find the path specified.
E0817 00:15:21.606496  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210817000749-111344\client.crt: The system cannot find the path specified.
E0817 00:15:22.891139  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210817000749-111344\client.crt: The system cannot find the path specified.
E0817 00:15:25.451975  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210817000749-111344\client.crt: The system cannot find the path specified.
E0817 00:15:30.572974  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210817000749-111344\client.crt: The system cannot find the path specified.

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Done: C:\Users\jenkins\AppData\Local\Temp\minikube-v1.9.0.188938468.exe start -p running-upgrade-20210817001515-111344 --memory=2200 --vm-driver=docker: (3m16.8362709s)
version_upgrade_test.go:138: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-20210817001515-111344 --memory=2200 --alsologtostderr -v=1 --driver=docker

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:138: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-20210817001515-111344 --memory=2200 --alsologtostderr -v=1 --driver=docker: (2m15.7178865s)
helpers_test.go:176: Cleaning up "running-upgrade-20210817001515-111344" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-20210817001515-111344

=== CONT  TestRunningBinaryUpgrade
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-20210817001515-111344: (22.5627811s)
--- PASS: TestRunningBinaryUpgrade (355.86s)

TestMissingContainerUpgrade (373.79s)
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:311: (dbg) Run:  C:\Users\jenkins\AppData\Local\Temp\minikube-v1.9.1.157752246.exe start -p missing-upgrade-20210817002111-111344 --memory=2200 --driver=docker

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:311: (dbg) Done: C:\Users\jenkins\AppData\Local\Temp\minikube-v1.9.1.157752246.exe start -p missing-upgrade-20210817002111-111344 --memory=2200 --driver=docker: (3m21.411693s)
version_upgrade_test.go:320: (dbg) Run:  docker stop missing-upgrade-20210817002111-111344
version_upgrade_test.go:320: (dbg) Done: docker stop missing-upgrade-20210817002111-111344: (6.8592633s)
version_upgrade_test.go:325: (dbg) Run:  docker rm missing-upgrade-20210817002111-111344
version_upgrade_test.go:331: (dbg) Run:  out/minikube-windows-amd64.exe start -p missing-upgrade-20210817002111-111344 --memory=2200 --alsologtostderr -v=1 --driver=docker
E0817 00:25:20.331137  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210817000749-111344\client.crt: The system cannot find the path specified.

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:331: (dbg) Done: out/minikube-windows-amd64.exe start -p missing-upgrade-20210817002111-111344 --memory=2200 --alsologtostderr -v=1 --driver=docker: (2m23.0420371s)
helpers_test.go:176: Cleaning up "missing-upgrade-20210817002111-111344" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p missing-upgrade-20210817002111-111344

=== CONT  TestMissingContainerUpgrade
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p missing-upgrade-20210817002111-111344: (20.8038393s)
--- PASS: TestMissingContainerUpgrade (373.79s)

TestPause/serial/Start (248.22s)
=== RUN   TestPause/serial/Start
pause_test.go:77: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-20210817001556-111344 --memory=2048 --install-addons=false --wait=all --driver=docker

=== CONT  TestPause/serial/Start
pause_test.go:77: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-20210817001556-111344 --memory=2048 --install-addons=false --wait=all --driver=docker: (4m8.2224999s)
--- PASS: TestPause/serial/Start (248.22s)

TestPause/serial/SecondStartNoReconfiguration (49.21s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:89: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-20210817001556-111344 --alsologtostderr -v=1 --driver=docker
E0817 00:20:20.320927  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210817000749-111344\client.crt: The system cannot find the path specified.
E0817 00:20:48.098176  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210817000749-111344\client.crt: The system cannot find the path specified.

=== CONT  TestPause/serial/SecondStartNoReconfiguration
pause_test.go:89: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-20210817001556-111344 --alsologtostderr -v=1 --driver=docker: (49.1647799s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (49.21s)

TestPause/serial/Pause (5.71s)
=== RUN   TestPause/serial/Pause
pause_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-20210817001556-111344 --alsologtostderr -v=5
pause_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-20210817001556-111344 --alsologtostderr -v=5: (5.7074433s)
--- PASS: TestPause/serial/Pause (5.71s)

TestPause/serial/VerifyStatus (4.32s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p pause-20210817001556-111344 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p pause-20210817001556-111344 --output=json --layout=cluster: exit status 2 (4.3140294s)

-- stdout --
	{"Name":"pause-20210817001556-111344","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 13 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.22.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20210817001556-111344","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (4.32s)
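The --output=json --layout=cluster payload above is the machine-readable form of minikube status, and this run shows the StatusCode convention in use: 200/OK, 405/Stopped, 418/Paused. A sketch of decoding it, with the struct trimmed to the fields visible in the log rather than minikube's full schema:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Trimmed to the fields that appear in the stdout above.
	type node struct {
		Name       string `json:"Name"`
		StatusCode int    `json:"StatusCode"`
		StatusName string `json:"StatusName"`
	}

	type clusterStatus struct {
		Name       string `json:"Name"`
		StatusCode int    `json:"StatusCode"`
		StatusName string `json:"StatusName"`
		Nodes      []node `json:"Nodes"`
	}

	func main() {
		// Abbreviated from the payload logged above.
		raw := `{"Name":"pause-20210817001556-111344","StatusCode":418,"StatusName":"Paused","Nodes":[{"Name":"pause-20210817001556-111344","StatusCode":200,"StatusName":"OK"}]}`
		var st clusterStatus
		if err := json.Unmarshal([]byte(raw), &st); err != nil {
			panic(err)
		}
		// 418 is how this layout reports a paused cluster.
		fmt.Printf("%s: %s (%d), %d node(s)\n", st.Name, st.StatusName, st.StatusCode, len(st.Nodes))
	}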

TestPause/serial/Unpause (5.29s)
=== RUN   TestPause/serial/Unpause
pause_test.go:118: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p pause-20210817001556-111344 --alsologtostderr -v=5
pause_test.go:118: (dbg) Done: out/minikube-windows-amd64.exe unpause -p pause-20210817001556-111344 --alsologtostderr -v=5: (5.2912326s)
--- PASS: TestPause/serial/Unpause (5.29s)

TestPause/serial/PauseAgain (5.46s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-20210817001556-111344 --alsologtostderr -v=5

=== CONT  TestPause/serial/PauseAgain
pause_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-20210817001556-111344 --alsologtostderr -v=5: (5.4622164s)
--- PASS: TestPause/serial/PauseAgain (5.46s)

TestPause/serial/DeletePaused (19.8s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:129: (dbg) Run:  out/minikube-windows-amd64.exe delete -p pause-20210817001556-111344 --alsologtostderr -v=5
pause_test.go:129: (dbg) Done: out/minikube-windows-amd64.exe delete -p pause-20210817001556-111344 --alsologtostderr -v=5: (19.8008239s)
--- PASS: TestPause/serial/DeletePaused (19.80s)

TestPause/serial/VerifyDeletedResources (16.72s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:139: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
pause_test.go:139: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (15.6049246s)
pause_test.go:165: (dbg) Run:  docker ps -a
pause_test.go:170: (dbg) Run:  docker volume inspect pause-20210817001556-111344
pause_test.go:170: (dbg) Non-zero exit: docker volume inspect pause-20210817001556-111344: exit status 1 (536.2874ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such volume: pause-20210817001556-111344

** /stderr **
--- PASS: TestPause/serial/VerifyDeletedResources (16.72s)

TestStartStop/group/old-k8s-version/serial/FirstStart (258.39s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-20210817002204-111344 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.14.0

=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe start -p old-k8s-version-20210817002204-111344 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.14.0: (4m18.3866145s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (258.39s)

TestStartStop/group/no-preload/serial/FirstStart (284.01s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-20210817002237-111344 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.22.0-rc.0

=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe start -p no-preload-20210817002237-111344 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.22.0-rc.0: (4m44.0079818s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (284.01s)

TestStartStop/group/embed-certs/serial/FirstStart (215.34s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-20210817002328-111344 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.21.3
E0817 00:24:09.138903  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210816232348-111344\client.crt: The system cannot find the path specified.

=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-20210817002328-111344 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.21.3: (3m35.3369584s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (215.34s)

TestStartStop/group/old-k8s-version/serial/DeployApp (18.82s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context old-k8s-version-20210817002204-111344 create -f testdata\busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [c06a135c-fef1-11eb-be8f-02423db82f6e] Pending
helpers_test.go:343: "busybox" [c06a135c-fef1-11eb-be8f-02423db82f6e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [c06a135c-fef1-11eb-be8f-02423db82f6e] Running
start_stop_delete_test.go:169: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 17.0993912s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context old-k8s-version-20210817002204-111344 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (18.82s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (6.25s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-20210817002204-111344 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-20210817002204-111344 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (5.5100362s)
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context old-k8s-version-20210817002204-111344 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (6.25s)

TestStartStop/group/old-k8s-version/serial/Stop (17.92s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-windows-amd64.exe stop -p old-k8s-version-20210817002204-111344 --alsologtostderr -v=3
E0817 00:27:01.079355  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344\client.crt: The system cannot find the path specified.

=== CONT  TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:201: (dbg) Done: out/minikube-windows-amd64.exe stop -p old-k8s-version-20210817002204-111344 --alsologtostderr -v=3: (17.9202595s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (17.92s)

TestStartStop/group/embed-certs/serial/DeployApp (16.99s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context embed-certs-20210817002328-111344 create -f testdata\busybox.yaml

=== CONT  TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Done: kubectl --context embed-certs-20210817002328-111344 create -f testdata\busybox.yaml: (1.1820963s)
start_stop_delete_test.go:169: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...

=== CONT  TestStartStop/group/embed-certs/serial/DeployApp
helpers_test.go:343: "busybox" [d32ca345-2088-4a4e-91ad-9b78d10c9da9] Pending
helpers_test.go:343: "busybox" [d32ca345-2088-4a4e-91ad-9b78d10c9da9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])

=== CONT  TestStartStop/group/embed-certs/serial/DeployApp
helpers_test.go:343: "busybox" [d32ca345-2088-4a4e-91ad-9b78d10c9da9] Running
start_stop_delete_test.go:169: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 15.0957829s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context embed-certs-20210817002328-111344 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (16.99s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (4.52s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20210817002204-111344 -n old-k8s-version-20210817002204-111344

=== CONT  TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20210817002204-111344 -n old-k8s-version-20210817002204-111344: exit status 7 (2.3029648s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p old-k8s-version-20210817002204-111344 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:219: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p old-k8s-version-20210817002204-111344 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (2.2212644s)
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (4.52s)

TestStartStop/group/old-k8s-version/serial/SecondStart (465.2s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-20210817002204-111344 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.14.0

=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-windows-amd64.exe start -p old-k8s-version-20210817002204-111344 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.14.0: (7m40.5299252s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20210817002204-111344 -n old-k8s-version-20210817002204-111344
start_stop_delete_test.go:235: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20210817002204-111344 -n old-k8s-version-20210817002204-111344: (4.6641608s)
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (465.20s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (5.24s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-20210817002328-111344 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
=== CONT  TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-20210817002328-111344 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (4.8981882s)
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context embed-certs-20210817002328-111344 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (5.24s)

TestStartStop/group/no-preload/serial/DeployApp (14.44s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context no-preload-20210817002237-111344 create -f testdata\busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [b19156a2-3967-4f5f-b30b-1c5d2b6bd800] Pending
helpers_test.go:343: "busybox" [b19156a2-3967-4f5f-b30b-1c5d2b6bd800] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
=== CONT  TestStartStop/group/no-preload/serial/DeployApp
helpers_test.go:343: "busybox" [b19156a2-3967-4f5f-b30b-1c5d2b6bd800] Running
=== CONT  TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:169: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 13.0561767s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context no-preload-20210817002237-111344 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (14.44s)
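The DeployApp steps above all follow one pattern: create the busybox pod from testdata\busybox.yaml, then poll pods matching the label integration-test=busybox until they leave Pending and report Running (the interleaved helpers_test.go:343 lines are snapshots of that poll). A simplified sketch of such a wait loop, shelling out to kubectl as the tests do (a hypothetical helper; the real one also checks readiness conditions, as the Ready:ContainersNotReady states above show):

	// Poll `kubectl get pods` for a label selector until every
	// matching pod reports phase Running, or the timeout elapses.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func waitForRunning(kubecontext, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", kubecontext,
				"get", "pods", "-l", selector,
				"-o", "jsonpath={.items[*].status.phase}").Output()
			if err == nil {
				phases := strings.Fields(string(out))
				allRunning := len(phases) > 0
				for _, p := range phases {
					if p != "Running" {
						allRunning = false
					}
				}
				if allRunning {
					return nil
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pods %q not Running within %v", selector, timeout)
	}

	func main() {
		err := waitForRunning("no-preload-20210817002237-111344",
			"integration-test=busybox", 8*time.Minute)
		fmt.Println(err)
	}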

TestStartStop/group/embed-certs/serial/Stop (17.77s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-windows-amd64.exe stop -p embed-certs-20210817002328-111344 --alsologtostderr -v=3
=== CONT  TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:201: (dbg) Done: out/minikube-windows-amd64.exe stop -p embed-certs-20210817002328-111344 --alsologtostderr -v=3: (17.7669194s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (17.77s)

TestStartStop/group/default-k8s-different-port/serial/FirstStart (174.91s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-different-port-20210817002733-111344 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.21.3
=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-different-port-20210817002733-111344 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.21.3: (2m54.9045361s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/FirstStart (174.91s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (4.88s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-20210817002237-111344 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-20210817002237-111344 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (4.5339193s)
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context no-preload-20210817002237-111344 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (4.88s)

TestStartStop/group/no-preload/serial/Stop (17.58s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-windows-amd64.exe stop -p no-preload-20210817002237-111344 --alsologtostderr -v=3
=== CONT  TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:201: (dbg) Done: out/minikube-windows-amd64.exe stop -p no-preload-20210817002237-111344 --alsologtostderr -v=3: (17.5848622s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (17.58s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (4.48s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20210817002328-111344 -n embed-certs-20210817002328-111344
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20210817002328-111344 -n embed-certs-20210817002328-111344: exit status 7 (2.2334805s)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p embed-certs-20210817002328-111344 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:219: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p embed-certs-20210817002328-111344 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (2.2438552s)
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (4.48s)

TestStartStop/group/embed-certs/serial/SecondStart (448.1s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-20210817002328-111344 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.21.3
=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-20210817002328-111344 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.21.3: (7m22.8614258s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20210817002328-111344 -n embed-certs-20210817002328-111344
=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:235: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20210817002328-111344 -n embed-certs-20210817002328-111344: (5.2329562s)
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (448.10s)
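The --embed-certs flag used in this restart makes minikube write the client certificate and key into kubeconfig as inline base64 data rather than as file-path references. A quick way to confirm that after a start (a sketch, not part of the test; the jsonpath field name comes from the standard kubeconfig schema):

	// Print whether the kubeconfig user entries carry embedded
	// certificate data (non-empty client-certificate-data fields).
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("kubectl", "config", "view", "--raw", "-o",
			"jsonpath={.users[*].user.client-certificate-data}").Output()
		if err != nil {
			panic(err)
		}
		fmt.Println("certs embedded:", strings.TrimSpace(string(out)) != "")
	}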

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (4.33s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20210817002237-111344 -n no-preload-20210817002237-111344
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20210817002237-111344 -n no-preload-20210817002237-111344: exit status 7 (2.1642907s)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p no-preload-20210817002237-111344 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:219: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p no-preload-20210817002237-111344 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (2.1679527s)
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (4.33s)

TestStartStop/group/no-preload/serial/SecondStart (475.76s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-20210817002237-111344 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.22.0-rc.0
E0817 00:28:24.157929  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344\client.crt: The system cannot find the path specified.
E0817 00:28:52.210914  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210816232348-111344\client.crt: The system cannot find the path specified.
E0817 00:29:09.149089  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210816232348-111344\client.crt: The system cannot find the path specified.
E0817 00:30:20.344147  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210817000749-111344\client.crt: The system cannot find the path specified.
=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-windows-amd64.exe start -p no-preload-20210817002237-111344 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.22.0-rc.0: (7m50.9113049s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20210817002237-111344 -n no-preload-20210817002237-111344
=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:235: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20210817002237-111344 -n no-preload-20210817002237-111344: (4.8492798s)
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (475.76s)
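The E0817 cert_rotation lines interleaved above are background noise from client-go's certificate-reload watcher: it still references client.crt files of profiles deleted earlier in the run (addons-, functional-, skaffold-), so each reload attempt logs a file-not-found error without affecting this test's outcome. A minimal sketch of the kind of guard that would silence such noise (illustrative only, not client-go's cert_rotation code; the client.key path is an assumption, placed alongside client.crt):

	// Skip reloading a client certificate whose backing files have
	// been removed, instead of logging an error on every attempt.
	package main

	import (
		"crypto/tls"
		"errors"
		"io/fs"
		"log"
		"os"
	)

	func maybeReload(certFile, keyFile string) (*tls.Certificate, error) {
		if _, err := os.Stat(certFile); errors.Is(err, fs.ErrNotExist) {
			return nil, nil // profile deleted; nothing to rotate
		}
		cert, err := tls.LoadX509KeyPair(certFile, keyFile)
		if err != nil {
			return nil, err
		}
		return &cert, nil
	}

	func main() {
		profile := `C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344`
		cert, err := maybeReload(profile+`\client.crt`, profile+`\client.key`)
		if err != nil {
			log.Fatal(err)
		}
		log.Println("certificate reloaded:", cert != nil)
	}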

TestStartStop/group/default-k8s-different-port/serial/DeployApp (13.38s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context default-k8s-different-port-20210817002733-111344 create -f testdata\busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [f1681857-8065-4a15-b350-688f8537ed86] Pending
helpers_test.go:343: "busybox" [f1681857-8065-4a15-b350-688f8537ed86] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [f1681857-8065-4a15-b350-688f8537ed86] Running
start_stop_delete_test.go:169: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: integration-test=busybox healthy within 12.0908098s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context default-k8s-different-port-20210817002733-111344 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-different-port/serial/DeployApp (13.38s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (5.26s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-different-port-20210817002733-111344 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-different-port-20210817002733-111344 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (4.9497864s)
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context default-k8s-different-port-20210817002733-111344 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (5.26s)

TestStartStop/group/default-k8s-different-port/serial/Stop (16.06s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-windows-amd64.exe stop -p default-k8s-different-port-20210817002733-111344 --alsologtostderr -v=3
start_stop_delete_test.go:201: (dbg) Done: out/minikube-windows-amd64.exe stop -p default-k8s-different-port-20210817002733-111344 --alsologtostderr -v=3: (16.0635342s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (16.06s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (4.17s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20210817002733-111344 -n default-k8s-different-port-20210817002733-111344
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20210817002733-111344 -n default-k8s-different-port-20210817002733-111344: exit status 7 (2.096366s)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p default-k8s-different-port-20210817002733-111344 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:219: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p default-k8s-different-port-20210817002733-111344 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (2.0732967s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (4.17s)

TestStartStop/group/default-k8s-different-port/serial/SecondStart (441.19s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-different-port-20210817002733-111344 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.21.3
E0817 00:31:43.485687  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210817000749-111344\client.crt: The system cannot find the path specified.
E0817 00:32:01.090265  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344\client.crt: The system cannot find the path specified.
E0817 00:34:09.161698  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210816232348-111344\client.crt: The system cannot find the path specified.
=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-different-port-20210817002733-111344 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.21.3: (7m15.8254037s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20210817002733-111344 -n default-k8s-different-port-20210817002733-111344
start_stop_delete_test.go:235: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20210817002733-111344 -n default-k8s-different-port-20210817002733-111344: (5.3645383s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/SecondStart (441.19s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.14s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-5d8978d65d-hjx2h" [d7a7dfd7-fef2-11eb-bf31-02421fc704d9] Running
start_stop_delete_test.go:247: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.1203618s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.14s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.62s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-5d8978d65d-hjx2h" [d7a7dfd7-fef2-11eb-bf31-02421fc704d9] Running
start_stop_delete_test.go:260: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0343308s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context old-k8s-version-20210817002204-111344 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.62s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (4.17s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p old-k8s-version-20210817002204-111344 "sudo crictl images -o json"
start_stop_delete_test.go:277: (dbg) Done: out/minikube-windows-amd64.exe ssh -p old-k8s-version-20210817002204-111344 "sudo crictl images -o json": (4.1727873s)
start_stop_delete_test.go:277: Found non-minikube image: busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (4.17s)

TestStartStop/group/old-k8s-version/serial/Pause (29.55s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-windows-amd64.exe pause -p old-k8s-version-20210817002204-111344 --alsologtostderr -v=1
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:284: (dbg) Done: out/minikube-windows-amd64.exe pause -p old-k8s-version-20210817002204-111344 --alsologtostderr -v=1: (6.544302s)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-20210817002204-111344 -n old-k8s-version-20210817002204-111344
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-20210817002204-111344 -n old-k8s-version-20210817002204-111344: exit status 2 (4.461454s)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-20210817002204-111344 -n old-k8s-version-20210817002204-111344
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-20210817002204-111344 -n old-k8s-version-20210817002204-111344: exit status 2 (4.2569091s)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p old-k8s-version-20210817002204-111344 --alsologtostderr -v=1
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:284: (dbg) Done: out/minikube-windows-amd64.exe unpause -p old-k8s-version-20210817002204-111344 --alsologtostderr -v=1: (4.9659057s)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-20210817002204-111344 -n old-k8s-version-20210817002204-111344
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:284: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-20210817002204-111344 -n old-k8s-version-20210817002204-111344: (4.3865072s)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-20210817002204-111344 -n old-k8s-version-20210817002204-111344
start_stop_delete_test.go:284: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-20210817002204-111344 -n old-k8s-version-20210817002204-111344: (4.9241631s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (29.55s)
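The Pause sequence above checks two status fields after each transition: once paused, {{.APIServer}} should print Paused and {{.Kubelet}} Stopped (each with the expected exit status 2), and after unpause both queries succeed again. A compact sketch of that state check (a hypothetical helper; it deliberately discards the exit error because, as the "(may be ok)" lines show, status still prints the state on a non-zero exit):

	// Query a single status field via a Go template and compare it
	// with the expected component state.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func statusField(profile, tmpl string) string {
		// The exit error is ignored on purpose: a paused or stopped
		// component makes minikube exit non-zero, but the state name
		// is still written to stdout.
		out, _ := exec.Command("out/minikube-windows-amd64.exe", "status",
			"--format="+tmpl, "-p", profile, "-n", profile).Output()
		return strings.TrimSpace(string(out))
	}

	func main() {
		p := "old-k8s-version-20210817002204-111344"
		fmt.Println("paused:",
			statusField(p, "{{.APIServer}}") == "Paused" &&
				statusField(p, "{{.Kubelet}}") == "Stopped")
	}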

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (8.09s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-cbwgk" [a515f5d5-a1df-48f9-9e90-3656c1b4155b] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-cbwgk" [a515f5d5-a1df-48f9-9e90-3656c1b4155b] Running
E0817 00:35:20.354227  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210817000749-111344\client.crt: The system cannot find the path specified.
=== CONT  TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 8.0847014s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (8.09s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.47s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-cbwgk" [a515f5d5-a1df-48f9-9e90-3656c1b4155b] Running
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0321295s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context embed-certs-20210817002328-111344 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.47s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (4.31s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p embed-certs-20210817002328-111344 "sudo crictl images -o json"
=== CONT  TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Done: out/minikube-windows-amd64.exe ssh -p embed-certs-20210817002328-111344 "sudo crictl images -o json": (4.3068901s)
start_stop_delete_test.go:277: Found non-minikube image: busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (4.31s)

TestStartStop/group/embed-certs/serial/Pause (32.52s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-windows-amd64.exe pause -p embed-certs-20210817002328-111344 --alsologtostderr -v=1
=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:284: (dbg) Done: out/minikube-windows-amd64.exe pause -p embed-certs-20210817002328-111344 --alsologtostderr -v=1: (8.2089385s)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-20210817002328-111344 -n embed-certs-20210817002328-111344
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-20210817002328-111344 -n embed-certs-20210817002328-111344: exit status 2 (4.3080222s)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-20210817002328-111344 -n embed-certs-20210817002328-111344
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-20210817002328-111344 -n embed-certs-20210817002328-111344: exit status 2 (4.3484692s)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p embed-certs-20210817002328-111344 --alsologtostderr -v=1
=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:284: (dbg) Done: out/minikube-windows-amd64.exe unpause -p embed-certs-20210817002328-111344 --alsologtostderr -v=1: (5.7933982s)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-20210817002328-111344 -n embed-certs-20210817002328-111344
=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:284: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-20210817002328-111344 -n embed-certs-20210817002328-111344: (5.2288052s)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-20210817002328-111344 -n embed-certs-20210817002328-111344
start_stop_delete_test.go:284: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-20210817002328-111344 -n embed-certs-20210817002328-111344: (4.6282847s)
--- PASS: TestStartStop/group/embed-certs/serial/Pause (32.52s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (9.06s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-922br" [c685dfa9-4635-42af-9735-b6f2ebba22ce] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-922br" [c685dfa9-4635-42af-9735-b6f2ebba22ce] Running
=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.0489104s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (9.06s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.61s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-922br" [c685dfa9-4635-42af-9735-b6f2ebba22ce] Running
=== CONT  TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0395473s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context no-preload-20210817002237-111344 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.61s)

TestStartStop/group/newest-cni/serial/FirstStart (520.5s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-20210817003608-111344 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.22.0-rc.0
=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe start -p newest-cni-20210817003608-111344 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.22.0-rc.0: (8m40.5008072s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (520.50s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (4.33s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p no-preload-20210817002237-111344 "sudo crictl images -o json"
start_stop_delete_test.go:277: (dbg) Done: out/minikube-windows-amd64.exe ssh -p no-preload-20210817002237-111344 "sudo crictl images -o json": (4.3318664s)
start_stop_delete_test.go:277: Found non-minikube image: busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (4.33s)

TestStartStop/group/no-preload/serial/Pause (28.33s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-windows-amd64.exe pause -p no-preload-20210817002237-111344 --alsologtostderr -v=1
E0817 00:36:23.488444  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\old-k8s-version-20210817002204-111344\client.crt: The system cannot find the path specified.
E0817 00:36:23.494361  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\old-k8s-version-20210817002204-111344\client.crt: The system cannot find the path specified.
E0817 00:36:23.505400  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\old-k8s-version-20210817002204-111344\client.crt: The system cannot find the path specified.
E0817 00:36:23.526348  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\old-k8s-version-20210817002204-111344\client.crt: The system cannot find the path specified.
E0817 00:36:23.567630  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\old-k8s-version-20210817002204-111344\client.crt: The system cannot find the path specified.
E0817 00:36:23.648604  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\old-k8s-version-20210817002204-111344\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:284: (dbg) Done: out/minikube-windows-amd64.exe pause -p no-preload-20210817002237-111344 --alsologtostderr -v=1: (5.8025048s)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-20210817002237-111344 -n no-preload-20210817002237-111344
E0817 00:36:23.810937  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\old-k8s-version-20210817002204-111344\client.crt: The system cannot find the path specified.
E0817 00:36:24.131322  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\old-k8s-version-20210817002204-111344\client.crt: The system cannot find the path specified.
E0817 00:36:24.773734  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\old-k8s-version-20210817002204-111344\client.crt: The system cannot find the path specified.
E0817 00:36:26.056387  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\old-k8s-version-20210817002204-111344\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-20210817002237-111344 -n no-preload-20210817002237-111344: exit status 2 (4.1555128s)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-20210817002237-111344 -n no-preload-20210817002237-111344
=== CONT  TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-20210817002237-111344 -n no-preload-20210817002237-111344: exit status 2 (3.9625576s)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p no-preload-20210817002237-111344 --alsologtostderr -v=1
E0817 00:36:33.738968  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\old-k8s-version-20210817002204-111344\client.crt: The system cannot find the path specified.
=== CONT  TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:284: (dbg) Done: out/minikube-windows-amd64.exe unpause -p no-preload-20210817002237-111344 --alsologtostderr -v=1: (4.7522048s)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-20210817002237-111344 -n no-preload-20210817002237-111344
start_stop_delete_test.go:284: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-20210817002237-111344 -n no-preload-20210817002237-111344: (4.3553442s)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-20210817002237-111344 -n no-preload-20210817002237-111344
E0817 00:36:43.981309  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\old-k8s-version-20210817002204-111344\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:284: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-20210817002237-111344 -n no-preload-20210817002237-111344: (5.3024996s)
--- PASS: TestStartStop/group/no-preload/serial/Pause (28.33s)

TestNetworkPlugins/group/auto/Start (198.13s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p auto-20210817002157-111344 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker
=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p auto-20210817002157-111344 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker: (3m18.1285728s)
--- PASS: TestNetworkPlugins/group/auto/Start (198.13s)

TestNetworkPlugins/group/false/Start (183.04s)
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p false-20210817002204-111344 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker
E0817 00:37:22.777147  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\no-preload-20210817002237-111344\client.crt: The system cannot find the path specified.
E0817 00:37:22.783452  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\no-preload-20210817002237-111344\client.crt: The system cannot find the path specified.
E0817 00:37:22.796877  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\no-preload-20210817002237-111344\client.crt: The system cannot find the path specified.
E0817 00:37:22.817752  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\no-preload-20210817002237-111344\client.crt: The system cannot find the path specified.
E0817 00:37:22.859843  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\no-preload-20210817002237-111344\client.crt: The system cannot find the path specified.
E0817 00:37:22.942543  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\no-preload-20210817002237-111344\client.crt: The system cannot find the path specified.
E0817 00:37:23.105526  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\no-preload-20210817002237-111344\client.crt: The system cannot find the path specified.
E0817 00:37:23.427450  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\no-preload-20210817002237-111344\client.crt: The system cannot find the path specified.
E0817 00:37:24.069522  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\no-preload-20210817002237-111344\client.crt: The system cannot find the path specified.
E0817 00:37:25.352201  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\no-preload-20210817002237-111344\client.crt: The system cannot find the path specified.
E0817 00:37:27.912921  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\no-preload-20210817002237-111344\client.crt: The system cannot find the path specified.
E0817 00:37:33.034798  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\no-preload-20210817002237-111344\client.crt: The system cannot find the path specified.
E0817 00:37:43.279618  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\no-preload-20210817002237-111344\client.crt: The system cannot find the path specified.
E0817 00:37:45.425800  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\old-k8s-version-20210817002204-111344\client.crt: The system cannot find the path specified.
E0817 00:38:03.762967  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\no-preload-20210817002237-111344\client.crt: The system cannot find the path specified.
=== CONT  TestNetworkPlugins/group/false/Start
net_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p false-20210817002204-111344 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker: (3m3.0347405s)
--- PASS: TestNetworkPlugins/group/false/Start (183.04s)

TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (8.12s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-5j9xj" [95620207-9104-43af-8590-5447decb7481] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-5j9xj" [95620207-9104-43af-8590-5447decb7481] Running
start_stop_delete_test.go:247: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 8.1002952s
--- PASS: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (8.12s)

TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (8.02s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-5j9xj" [95620207-9104-43af-8590-5447decb7481] Running
start_stop_delete_test.go:260: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.35716s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context default-k8s-different-port-20210817002733-111344 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:264: (dbg) Done: kubectl --context default-k8s-different-port-20210817002733-111344 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: (1.6311053s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (8.02s)

TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (4.57s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p default-k8s-different-port-20210817002733-111344 "sudo crictl images -o json"
E0817 00:38:44.733491  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\no-preload-20210817002237-111344\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:277: (dbg) Done: out/minikube-windows-amd64.exe ssh -p default-k8s-different-port-20210817002733-111344 "sudo crictl images -o json": (4.5642791s)
start_stop_delete_test.go:277: Found non-minikube image: busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (4.57s)

TestStartStop/group/default-k8s-different-port/serial/Pause (27.65s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-windows-amd64.exe pause -p default-k8s-different-port-20210817002733-111344 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Done: out/minikube-windows-amd64.exe pause -p default-k8s-different-port-20210817002733-111344 --alsologtostderr -v=1: (5.2302959s)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-different-port-20210817002733-111344 -n default-k8s-different-port-20210817002733-111344
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-different-port-20210817002733-111344 -n default-k8s-different-port-20210817002733-111344: exit status 2 (4.0043714s)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-different-port-20210817002733-111344 -n default-k8s-different-port-20210817002733-111344
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-different-port-20210817002733-111344 -n default-k8s-different-port-20210817002733-111344: exit status 2 (4.0021129s)
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p default-k8s-different-port-20210817002733-111344 --alsologtostderr -v=1
E0817 00:39:07.350856  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\old-k8s-version-20210817002204-111344\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:284: (dbg) Done: out/minikube-windows-amd64.exe unpause -p default-k8s-different-port-20210817002733-111344 --alsologtostderr -v=1: (5.683282s)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-different-port-20210817002733-111344 -n default-k8s-different-port-20210817002733-111344
E0817 00:39:09.174818  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210816232348-111344\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:284: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-different-port-20210817002733-111344 -n default-k8s-different-port-20210817002733-111344: (4.4234941s)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-different-port-20210817002733-111344 -n default-k8s-different-port-20210817002733-111344
start_stop_delete_test.go:284: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-different-port-20210817002733-111344 -n default-k8s-different-port-20210817002733-111344: (4.3064091s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Pause (27.65s)
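
The pause sequence above can be replayed by hand: while components are paused, `minikube status` exits with code 2, which the test logs as "may be ok", and the stdout values match the states shown. A minimal sketch, with <profile> standing in for a profile name:

	# Pause, confirm the reported states (exit code 2 while paused), then unpause.
	out/minikube-windows-amd64.exe pause -p <profile>
	out/minikube-windows-amd64.exe status --format={{.APIServer}} -p <profile>   # "Paused", exit 2
	out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p <profile>     # "Stopped", exit 2
	out/minikube-windows-amd64.exe unpause -p <profile>
	out/minikube-windows-amd64.exe status --format={{.APIServer}} -p <profile>   # exit 0 once components are running again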

                                                
                                    
TestNetworkPlugins/group/cilium/Start (433.7s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p cilium-20210817002204-111344 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p cilium-20210817002204-111344 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker: (7m13.6983093s)
--- PASS: TestNetworkPlugins/group/cilium/Start (433.70s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (3.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p auto-20210817002157-111344 "pgrep -a kubelet"
net_test.go:119: (dbg) Done: out/minikube-windows-amd64.exe ssh -p auto-20210817002157-111344 "pgrep -a kubelet": (3.8313101s)
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (3.83s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (16.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context auto-20210817002157-111344 replace --force -f testdata\netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-bwxx2" [9abee5f3-3e16-4d64-b9c5-226c77aaab21] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-66fbc655d5-bwxx2" [9abee5f3-3e16-4d64-b9c5-226c77aaab21] Running
E0817 00:40:06.659484  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\no-preload-20210817002237-111344\client.crt: The system cannot find the path specified.
net_test.go:145: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 15.1564316s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (16.12s)
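
The NetCatPod steps deploy testdata\netcat-deployment.yaml and then poll until pods labeled app=netcat are Running. A hand-run equivalent of that wait, sketched with plain kubectl (kubectl wait is standard kubectl rather than the test's helper):

	# Block until the netcat pods report Ready, within the test's 15m budget.
	kubectl --context auto-20210817002157-111344 wait pod -l app=netcat --for=condition=Ready --timeout=15m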

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.7s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:162: (dbg) Run:  kubectl --context auto-20210817002157-111344 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.70s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.7s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:181: (dbg) Run:  kubectl --context auto-20210817002157-111344 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.70s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (4.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p false-20210817002204-111344 "pgrep -a kubelet"

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/KubeletFlags
net_test.go:119: (dbg) Done: out/minikube-windows-amd64.exe ssh -p false-20210817002204-111344 "pgrep -a kubelet": (4.1345189s)
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (4.13s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (5.75s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:231: (dbg) Run:  kubectl --context auto-20210817002157-111344 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/HairPin
net_test.go:231: (dbg) Non-zero exit: kubectl --context auto-20210817002157-111344 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.7349077s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
--- PASS: TestNetworkPlugins/group/auto/HairPin (5.75s)
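
HairPin probes the netcat service from its own backing pod; in this run the probe exits 1 and the test still passes, since a pod reaching itself through its service depends on the CNI in use (the cilium run later in this report completes the same probe successfully). To repeat the probe and surface the exit code:

	# A non-zero exit means the pod could not reach itself through its service.
	kubectl --context auto-20210817002157-111344 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"; echo "exit: $?"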

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (24.75s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context false-20210817002204-111344 replace --force -f testdata\netcat-deployment.yaml
net_test.go:131: (dbg) Done: kubectl --context false-20210817002204-111344 replace --force -f testdata\netcat-deployment.yaml: (1.4144647s)
net_test.go:145: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-flsl8" [842ccc2d-332d-4965-861a-7227eb8bf4f1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/NetCatPod
helpers_test.go:343: "netcat-66fbc655d5-flsl8" [842ccc2d-332d-4965-861a-7227eb8bf4f1] Running
E0817 00:40:38.982523  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\default-k8s-different-port-20210817002733-111344\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/NetCatPod
net_test.go:145: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 23.0745099s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (24.75s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.74s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:162: (dbg) Run:  kubectl --context false-20210817002204-111344 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.74s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:181: (dbg) Run:  kubectl --context false-20210817002204-111344 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.60s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (5.74s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:231: (dbg) Run:  kubectl --context false-20210817002204-111344 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:231: (dbg) Non-zero exit: kubectl --context false-20210817002204-111344 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.7328155s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
--- PASS: TestNetworkPlugins/group/false/HairPin (5.74s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (10.32s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-20210817003608-111344 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0817 00:44:56.760268  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\auto-20210817002157-111344\client.crt: The system cannot find the path specified.
E0817 00:44:56.765840  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\auto-20210817002157-111344\client.crt: The system cannot find the path specified.
E0817 00:44:56.776792  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\auto-20210817002157-111344\client.crt: The system cannot find the path specified.
E0817 00:44:56.798126  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\auto-20210817002157-111344\client.crt: The system cannot find the path specified.
E0817 00:44:56.844096  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\auto-20210817002157-111344\client.crt: The system cannot find the path specified.
E0817 00:44:56.925375  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\auto-20210817002157-111344\client.crt: The system cannot find the path specified.
E0817 00:44:57.086290  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\auto-20210817002157-111344\client.crt: The system cannot find the path specified.
E0817 00:44:57.406536  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\auto-20210817002157-111344\client.crt: The system cannot find the path specified.
E0817 00:44:58.049429  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\auto-20210817002157-111344\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-20210817003608-111344 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (10.3242089s)
start_stop_delete_test.go:184: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (10.32s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (19.75s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-windows-amd64.exe stop -p newest-cni-20210817003608-111344 --alsologtostderr -v=3
E0817 00:44:59.329710  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\auto-20210817002157-111344\client.crt: The system cannot find the path specified.
E0817 00:45:01.890579  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\auto-20210817002157-111344\client.crt: The system cannot find the path specified.
E0817 00:45:04.199032  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344\client.crt: The system cannot find the path specified.
E0817 00:45:07.011313  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\auto-20210817002157-111344\client.crt: The system cannot find the path specified.
E0817 00:45:17.254336  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\auto-20210817002157-111344\client.crt: The system cannot find the path specified.
E0817 00:45:18.618765  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\false-20210817002204-111344\client.crt: The system cannot find the path specified.
E0817 00:45:18.631097  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\false-20210817002204-111344\client.crt: The system cannot find the path specified.
E0817 00:45:18.641973  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\false-20210817002204-111344\client.crt: The system cannot find the path specified.
E0817 00:45:18.662483  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\false-20210817002204-111344\client.crt: The system cannot find the path specified.
E0817 00:45:18.710712  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\false-20210817002204-111344\client.crt: The system cannot find the path specified.
E0817 00:45:18.791728  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\false-20210817002204-111344\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:201: (dbg) Done: out/minikube-windows-amd64.exe stop -p newest-cni-20210817003608-111344 --alsologtostderr -v=3: (19.7454875s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (19.75s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (4.41s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20210817003608-111344 -n newest-cni-20210817003608-111344
E0817 00:45:18.956212  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\false-20210817002204-111344\client.crt: The system cannot find the path specified.
E0817 00:45:19.277814  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\false-20210817002204-111344\client.crt: The system cannot find the path specified.
E0817 00:45:19.920200  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\false-20210817002204-111344\client.crt: The system cannot find the path specified.
E0817 00:45:20.376839  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210817000749-111344\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20210817003608-111344 -n newest-cni-20210817003608-111344: exit status 7 (2.215647s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p newest-cni-20210817003608-111344 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
E0817 00:45:21.207488  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\false-20210817002204-111344\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:219: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p newest-cni-20210817003608-111344 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (2.1949556s)
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (4.41s)
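
In this step, exit status 7 from `minikube status` accompanies a Stopped host, and the test accepts it before enabling an addon against the stopped profile. The same check, sketched with <profile> as a placeholder:

	# Confirm the host is stopped (status exits 7 here), then enable the addon offline.
	out/minikube-windows-amd64.exe status --format={{.Host}} -p <profile>
	out/minikube-windows-amd64.exe addons enable dashboard -p <profile> --images=MetricsScraper=k8s.gcr.io/echoserver:1.4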

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (101.54s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-20210817003608-111344 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.22.0-rc.0
E0817 00:45:23.768792  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\false-20210817002204-111344\client.crt: The system cannot find the path specified.
E0817 00:45:28.741090  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\default-k8s-different-port-20210817002733-111344\client.crt: The system cannot find the path specified.
E0817 00:45:28.890040  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\false-20210817002204-111344\client.crt: The system cannot find the path specified.
E0817 00:45:32.257495  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210816232348-111344\client.crt: The system cannot find the path specified.
E0817 00:45:37.736422  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\auto-20210817002157-111344\client.crt: The system cannot find the path specified.
E0817 00:45:39.131179  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\false-20210817002204-111344\client.crt: The system cannot find the path specified.
E0817 00:45:56.451679  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\default-k8s-different-port-20210817002733-111344\client.crt: The system cannot find the path specified.
E0817 00:45:59.613292  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\false-20210817002204-111344\client.crt: The system cannot find the path specified.
E0817 00:46:18.700342  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\auto-20210817002157-111344\client.crt: The system cannot find the path specified.
E0817 00:46:23.511130  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\old-k8s-version-20210817002204-111344\client.crt: The system cannot find the path specified.
E0817 00:46:40.584004  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\false-20210817002204-111344\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-windows-amd64.exe start -p newest-cni-20210817003608-111344 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.22.0-rc.0: (1m35.6461844s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20210817003608-111344 -n newest-cni-20210817003608-111344

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:235: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20210817003608-111344 -n newest-cni-20210817003608-111344: (5.896566s)
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (101.54s)
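
The second start reuses the stopped profile with its CNI-oriented flags; --extra-config forwards settings to individual components in component.key=value form. Restated as a sketch, with the flags copied from the run above and <profile> as a placeholder:

	# Restart a stopped profile, forwarding component settings via --extra-config.
	out/minikube-windows-amd64.exe start -p <profile> --memory=2200 --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.22.0-rc.0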

                                                
                                    
TestNetworkPlugins/group/cilium/ControllerPod (5.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:343: "cilium-zt4nw" [e6d28534-126f-46ed-a6f4-4f547e173b18] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.0682006s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.09s)

                                                
                                    
TestNetworkPlugins/group/cilium/KubeletFlags (4.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cilium-20210817002204-111344 "pgrep -a kubelet"
E0817 00:47:01.124685  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344\client.crt: The system cannot find the path specified.
net_test.go:119: (dbg) Done: out/minikube-windows-amd64.exe ssh -p cilium-20210817002204-111344 "pgrep -a kubelet": (4.5223297s)
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (4.52s)

                                                
                                    
TestNetworkPlugins/group/cilium/NetCatPod (37.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context cilium-20210817002204-111344 replace --force -f testdata\netcat-deployment.yaml

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:131: (dbg) Done: kubectl --context cilium-20210817002204-111344 replace --force -f testdata\netcat-deployment.yaml: (1.4162095s)
net_test.go:145: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-rbrmf" [bece83f6-a363-430b-a10e-118262b859ad] Pending
helpers_test.go:343: "netcat-66fbc655d5-rbrmf" [bece83f6-a363-430b-a10e-118262b859ad] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/NetCatPod
helpers_test.go:343: "netcat-66fbc655d5-rbrmf" [bece83f6-a363-430b-a10e-118262b859ad] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:145: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 36.0659748s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (37.88s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:246: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:257: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (4.5s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p newest-cni-20210817003608-111344 "sudo crictl images -o json"

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Done: out/minikube-windows-amd64.exe ssh -p newest-cni-20210817003608-111344 "sudo crictl images -o json": (4.4958148s)
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (4.50s)

                                                
                                    
TestNetworkPlugins/group/cilium/DNS (1.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:162: (dbg) Run:  kubectl --context cilium-20210817002204-111344 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Done: kubectl --context cilium-20210817002204-111344 exec deployment/netcat -- nslookup kubernetes.default: (1.0451602s)
--- PASS: TestNetworkPlugins/group/cilium/DNS (1.05s)

                                                
                                    
TestNetworkPlugins/group/cilium/Localhost (1.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:181: (dbg) Run:  kubectl --context cilium-20210817002204-111344 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
net_test.go:181: (dbg) Done: kubectl --context cilium-20210817002204-111344 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080": (1.4427079s)
--- PASS: TestNetworkPlugins/group/cilium/Localhost (1.45s)

                                                
                                    
TestNetworkPlugins/group/cilium/HairPin (1.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:231: (dbg) Run:  kubectl --context cilium-20210817002204-111344 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/HairPin
net_test.go:231: (dbg) Done: kubectl --context cilium-20210817002204-111344 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": (1.1242685s)
--- PASS: TestNetworkPlugins/group/cilium/HairPin (1.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (206.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p enable-default-cni-20210817002157-111344 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p enable-default-cni-20210817002157-111344 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker: (3m26.9184252s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (206.92s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (4.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p enable-default-cni-20210817002157-111344 "pgrep -a kubelet"
net_test.go:119: (dbg) Done: out/minikube-windows-amd64.exe ssh -p enable-default-cni-20210817002157-111344 "pgrep -a kubelet": (4.3303718s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (4.33s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (26.67s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context enable-default-cni-20210817002157-111344 replace --force -f testdata\netcat-deployment.yaml
net_test.go:131: (dbg) Done: kubectl --context enable-default-cni-20210817002157-111344 replace --force -f testdata\netcat-deployment.yaml: (1.015292s)
net_test.go:145: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-flvmz" [068b8198-fd3a-4143-8325-69899ff0faad] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0817 00:51:54.722514  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\cilium-20210817002204-111344\client.crt: The system cannot find the path specified.
E0817 00:51:54.727607  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\cilium-20210817002204-111344\client.crt: The system cannot find the path specified.
E0817 00:51:54.738669  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\cilium-20210817002204-111344\client.crt: The system cannot find the path specified.
E0817 00:51:54.759987  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\cilium-20210817002204-111344\client.crt: The system cannot find the path specified.
E0817 00:51:54.801152  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\cilium-20210817002204-111344\client.crt: The system cannot find the path specified.
E0817 00:51:54.881520  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\cilium-20210817002204-111344\client.crt: The system cannot find the path specified.
E0817 00:51:55.043275  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\cilium-20210817002204-111344\client.crt: The system cannot find the path specified.
E0817 00:51:55.364424  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\cilium-20210817002204-111344\client.crt: The system cannot find the path specified.
E0817 00:51:56.006588  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\cilium-20210817002204-111344\client.crt: The system cannot find the path specified.
E0817 00:51:57.289775  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\cilium-20210817002204-111344\client.crt: The system cannot find the path specified.
E0817 00:51:59.851695  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\cilium-20210817002204-111344\client.crt: The system cannot find the path specified.
helpers_test.go:343: "netcat-66fbc655d5-flvmz" [068b8198-fd3a-4143-8325-69899ff0faad] Running
E0817 00:52:01.135025  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344\client.crt: The system cannot find the path specified.
net_test.go:145: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 25.0469948s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (26.67s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (380.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubenet-20210817002157-111344 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/Start
net_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p kubenet-20210817002157-111344 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker: (6m20.5274477s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (380.53s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (3.59s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kubenet-20210817002157-111344 "pgrep -a kubelet"
E0817 00:59:56.795407  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\auto-20210817002157-111344\client.crt: The system cannot find the path specified.
net_test.go:119: (dbg) Done: out/minikube-windows-amd64.exe ssh -p kubenet-20210817002157-111344 "pgrep -a kubelet": (3.5865267s)
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (3.59s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (13.96s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context kubenet-20210817002157-111344 replace --force -f testdata\netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-8wp75" [f3f7e5de-56db-469d-bd3f-a535811258b7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-66fbc655d5-8wp75" [f3f7e5de-56db-469d-bd3f-a535811258b7] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 13.0336844s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (13.96s)

                                                
                                    

Test skip (22/249)

TestDownloadOnly/v1.14.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.14.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.14.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.14.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.21.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.21.3/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.21.3/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.21.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.21.3/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.21.3/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.22.0-rc.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.0-rc.0/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.22.0-rc.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.22.0-rc.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.0-rc.0/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.22.0-rc.0/binaries (0.00s)

                                                
                                    
TestAddons/parallel/Registry (24.36s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:284: registry stabilized in 56.0719ms

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:286: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...

                                                
                                                
=== CONT  TestAddons/parallel/Registry
helpers_test.go:343: "registry-cthtr" [a3b6cbf4-099c-41de-9488-1cd1dfae4d47] Running

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:286: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.0891595s

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:289: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:343: "registry-proxy-4tqpl" [9e9df820-eede-4eb7-b43e-f6015c45c25e] Running

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:289: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.078656s
addons_test.go:294: (dbg) Run:  kubectl --context addons-20210816231050-111344 delete po -l run=registry-test --now

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:299: (dbg) Run:  kubectl --context addons-20210816231050-111344 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:299: (dbg) Done: kubectl --context addons-20210816231050-111344 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (13.8142634s)
addons_test.go:309: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (24.36s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:42: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:115: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:188: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (300.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:857: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-20210816232348-111344 --alsologtostderr -v=1]

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:868: output didn't produce a URL
functional_test.go:862: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-20210816232348-111344 --alsologtostderr -v=1] ...
helpers_test.go:489: unable to find parent, assuming dead: process does not exist
--- SKIP: TestFunctional/parallel/DashboardCmd (300.02s)

                                                
                                    
TestFunctional/parallel/MountCmd (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:58: skipping: mount broken on windows: https://github.com/kubernetes/minikube/issues/8303
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

                                                
                                    
TestFunctional/parallel/ServiceCmd (27.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1357: (dbg) Run:  kubectl --context functional-20210816232348-111344 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1363: (dbg) Run:  kubectl --context functional-20210816232348-111344 expose deployment hello-node --type=NodePort --port=8080

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1368: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:343: "hello-node-6cbfcd7cbc-qwdmc" [8b938f55-a834-41e6-be15-223d6097b670] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:343: "hello-node-6cbfcd7cbc-qwdmc" [8b938f55-a834-41e6-be15-223d6097b670] Running

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1368: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 23.0744351s
functional_test.go:1372: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210816232348-111344 service list

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1372: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210816232348-111344 service list: (4.2589053s)
functional_test.go:1381: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmd (27.96s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:527: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:188: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:96: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:96: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:96: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestScheduledStopUnix (0s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:77: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (7.58s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:91: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-20210817002725-111344" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p disable-driver-mounts-20210817002725-111344

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p disable-driver-mounts-20210817002725-111344: (7.5785778s)
--- SKIP: TestStartStop/group/disable-driver-mounts (7.58s)

                                                
                                    
TestNetworkPlugins/group/flannel (6.68s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel
net_test.go:76: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:176: Cleaning up "flannel-20210817002157-111344" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p flannel-20210817002157-111344
E0817 00:22:01.067661  111344 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210816231050-111344\client.crt: The system cannot find the path specified.
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p flannel-20210817002157-111344: (6.6809847s)
--- SKIP: TestNetworkPlugins/group/flannel (6.68s)

                                                
                                    