Test Report: Docker_Windows 12425

b9d7ac983dd68de861f6c962981dfd25d0b1477c:2021-09-15:20481

Tests failed (17/232)

TestCertOptions (558.72s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:48: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-20210915202501-22848 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost
E0915 20:25:46.756068   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210915200110-22848\client.crt: The system cannot find the path specified.
E0915 20:27:09.831013   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210915200110-22848\client.crt: The system cannot find the path specified.
E0915 20:27:35.665923   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915185528-22848\client.crt: The system cannot find the path specified.
E0915 20:28:32.253220   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915183056-22848\client.crt: The system cannot find the path specified.
E0915 20:30:46.754620   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210915200110-22848\client.crt: The system cannot find the path specified.

=== CONT  TestCertOptions
cert_options_test.go:48: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-20210915202501-22848 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost: (8m11.6994885s)
cert_options_test.go:59: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-20210915202501-22848 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"

=== CONT  TestCertOptions
cert_options_test.go:59: (dbg) Done: out/minikube-windows-amd64.exe -p cert-options-20210915202501-22848 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": (6.3612805s)
cert_options_test.go:74: (dbg) Run:  kubectl --context cert-options-20210915202501-22848 config view
cert_options_test.go:79: apiserver server port incorrect. Output of 'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters:\n\t- cluster:\n\t    certificate-authority: C:\\Users\\jenkins\\minikube-integration\\.minikube\\ca.crt\n\t    extensions:\n\t    - extension:\n\t        last-update: Wed, 15 Sep 2021 20:32:51 GMT\n\t        provider: minikube.sigs.k8s.io\n\t        version: v1.23.0\n\t      name: cluster_info\n\t    server: https://localhost:57343\n\t  name: cert-options-20210915202501-22848\n\t- cluster:\n\t    certificate-authority: C:\\Users\\jenkins\\minikube-integration\\.minikube\\ca.crt\n\t    extensions:\n\t    - extension:\n\t        last-update: Wed, 15 Sep 2021 20:32:56 GMT\n\t        provider: minikube.sigs.k8s.io\n\t        version: v1.23.0\n\t      name: cluster_info\n\t    server: https://127.0.0.1:57304\n\t  name: docker-flags-20210915202413-22848\n\t- cluster:\n\t    certificate-authority: C:\\Users\\jenkins\\minikube-integration\\.minikube\\ca.crt\n\t    server: https://127.0.0.1:57338\n\t  name: missing-upgrade-20210915202421-22848\n\t- cluster:\n\t    certificate-authority: C:\\Users\\jenkins\\minikube-integration\\.minikube\\ca.crt\n\t    extensions:\n\t    - extension:\n\t        last-update: Wed, 15 Sep 2021 20:17:15 GMT\n\t        provider: minikube.sigs.k8s.io\n\t        version: v1.23.0\n\t      name: cluster_info\n\t    server: https://127.0.0.1:57020\n\t  name: pause-20210915200708-22848\n\tcontexts:\n\t- context:\n\t    cluster: cert-options-20210915202501-22848\n\t    extensions:\n\t    - extension:\n\t        last-update: Wed, 15 Sep 2021 20:32:51 GMT\n\t        provider: minikube.sigs.k8s.io\n\t        version: v1.23.0\n\t      name: context_info\n\t    namespace: default\n\t    user: cert-options-20210915202501-22848\n\t  name: cert-options-20210915202501-22848\n\t- context:\n\t    cluster: docker-flags-20210915202413-22848\n\t    extensions:\n\t    - extension:\n\t        last-update: Wed, 15 Sep 2021 20:32:56 GMT\n\t        provider: minikube.sigs.k8s.io\n\t        version: v1.23.0\n\t      name: context_info\n\t    namespace: default\n\t    user: docker-flags-20210915202413-22848\n\t  name: docker-flags-20210915202413-22848\n\t- context:\n\t    cluster: missing-upgrade-20210915202421-22848\n\t    user: missing-upgrade-20210915202421-22848\n\t  name: missing-upgrade-20210915202421-22848\n\t- context:\n\t    cluster: pause-20210915200708-22848\n\t    extensions:\n\t    - extension:\n\t        last-update: Wed, 15 Sep 2021 20:17:15 GMT\n\t        provider: minikube.sigs.k8s.io\n\t        version: v1.23.0\n\t      name: context_info\n\t    namespace: default\n\t    user: pause-20210915200708-22848\n\t  name: pause-20210915200708-22848\n\tcurrent-context: docker-flags-20210915202413-22848\n\tkind: Config\n\tpreferences: {}\n\tusers:\n\t- name: cert-options-20210915202501-22848\n\t  user:\n\t    client-certificate: C:\\Users\\jenkins\\minikube-integration\\.minikube\\profiles\\cert-options-20210915202501-22848\\client.crt\n\t    client-key: C:\\Users\\jenkins\\minikube-integration\\.minikube\\profiles\\cert-options-20210915202501-22848\\client.key\n\t- name: docker-flags-20210915202413-22848\n\t  user:\n\t    client-certificate: C:\\Users\\jenkins\\minikube-integration\\.minikube\\profiles\\docker-flags-20210915202413-22848\\client.crt\n\t    client-key: C:\\Users\\jenkins\\minikube-integration\\.minikube\\profiles\\docker-flags-20210915202413-22848\\client.key\n\t- name: missing-upgrade-20210915202421-22848\n\t  user:\n\t    client-certificate: C:\\Users\\jenkins\\minikube-integration\\.minikube\\profiles\\missing-upgrade-20210915202421-22848\\client.crt\n\t    client-key: C:\\Users\\jenkins\\minikube-integration\\.minikube\\profiles\\missing-upgrade-20210915202421-22848\\client.key\n\t- name: pause-20210915200708-22848\n\t  user:\n\t    client-certificate: C:\\Users\\jenkins\\minikube-integration\\.minikube\\profiles\\pause-20210915200708-22848\\client.crt\n\t    client-key: C:\\Users\\jenkins\\minikube-integration\\.minikube\\profiles\\pause-20210915200708-22848\\client.key\n\n-- /stdout --"
cert_options_test.go:82: *** TestCertOptions FAILED at 2021-09-15 20:33:20.3074433 +0000 GMT m=+7448.685775801
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestCertOptions]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect cert-options-20210915202501-22848
helpers_test.go:232: (dbg) Done: docker inspect cert-options-20210915202501-22848: (1.0327979s)
helpers_test.go:236: (dbg) docker inspect cert-options-20210915202501-22848:

-- stdout --
	[
	    {
	        "Id": "18fc1254e4db0300c1a06b56b45fd17d583a0315bd8f358589c58d3fae871ce4",
	        "Created": "2021-09-15T20:25:26.901697Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 160797,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-09-15T20:25:32.4381771Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:83b5a81388468b1ffcd3874b4f24c1406c63c33ac07797cc8bed6ad0207d36a8",
	        "ResolvConfPath": "/var/lib/docker/containers/18fc1254e4db0300c1a06b56b45fd17d583a0315bd8f358589c58d3fae871ce4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/18fc1254e4db0300c1a06b56b45fd17d583a0315bd8f358589c58d3fae871ce4/hostname",
	        "HostsPath": "/var/lib/docker/containers/18fc1254e4db0300c1a06b56b45fd17d583a0315bd8f358589c58d3fae871ce4/hosts",
	        "LogPath": "/var/lib/docker/containers/18fc1254e4db0300c1a06b56b45fd17d583a0315bd8f358589c58d3fae871ce4/18fc1254e4db0300c1a06b56b45fd17d583a0315bd8f358589c58d3fae871ce4-json.log",
	        "Name": "/cert-options-20210915202501-22848",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "cert-options-20210915202501-22848:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "cert-options-20210915202501-22848",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8555/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2147483648,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/eef8a6f7ccb5c8643704566458fded2b99fd2571c4a987baf463133666436a74-init/diff:/var/lib/docker/overlay2/a259804ff45c264548e9459111f8eb7e789339b3253b50b62afde896e9e19e34/diff:/var/lib/docker/overlay2/61882a81480713e64bf02bef67583a0609b2be0589d08187547a88789584af86/diff:/var/lib/docker/overlay2/a41d1f5e24156c1d438fe25c567f3c3492d15cb77b1bf5545be9086be845138a/diff:/var/lib/docker/overlay2/86e30e10438032d0a02b54850ad0316347488f3d5b831234af1e91f943269850/diff:/var/lib/docker/overlay2/f6962936c0c1b0636454847e8e963a472786602e15a00d5e020827c2372acfce/diff:/var/lib/docker/overlay2/5eee83c6029359aefecbba85cc6d456e3a5a97c3ef6e9f4850e8a53c62b30ef5/diff:/var/lib/docker/overlay2/fdaa4e134ab960962e0a388adaa3a6aa59dd139cc016dfd4cdf4565bc80e8469/diff:/var/lib/docker/overlay2/9e1b9be7e17136fa81b0a224e2fab9704d3234ca119d87c14f9a676bbdb023f5/diff:/var/lib/docker/overlay2/ffe06185e93cb7ae8d48d84ea9be8817f2ae3d2aae85114ce41477579e23debd/diff:/var/lib/docker/overlay2/22171320a621ffe79c2acb0c13308b1b0cd3bc94a4083992e7b8589b820c625c/diff:/var/lib/docker/overlay2/eb2fb3ccafd6cb1c26a9642601357b3e0563e9e9361a5ab359bf1af592a0d709/diff:/var/lib/docker/overlay2/6081368e802a14f6f6a7424eb7af3f5f29f85bf59ed0a0709ce25b53738095cb/diff:/var/lib/docker/overlay2/fd7176e5912a824a0543fa3ab5170921538a287401ff8a451c90e1ef0fd8adea/diff:/var/lib/docker/overlay2/eec5078968f5e7332ff82191a780be0efef38aef75ea7cd67723ab3d2760c281/diff:/var/lib/docker/overlay2/d18d41a44c04cb695c4b69ac0db0d5807cee4ca8a5a695629f97e2d8d9cf9461/diff:/var/lib/docker/overlay2/b125406c01cea6a83fa5515a19bb6822d1194fcd47eeb1ed541b9304804a54be/diff:/var/lib/docker/overlay2/b49ae7a2c3101c5b094f611e08fb7b68d8688cb3c333066f697aafc1dc7c2c7e/diff:/var/lib/docker/overlay2/ce599106d279966257baab0cc43ed0366d690702b449073e812a47ae6698dedf/diff:/var/lib/docker/overlay2/5f005c2e8ab4cd52b59f5118e6f5e352dd834afde547ba1ee7b71141319e3547/diff:/var/lib/docker/overlay2/2b1f9abca5d32e21fe1da66b2604d858599b74fc9359bd55e050cebccaba5c7d/diff:/var/lib/docker/overlay2/a5f956d0de2a0313dfbaefb921518d8a75267b71a9e7c68207a81682db5394b5/diff:/var/lib/docker/overlay2/e0050af32b9eb0f12404cf384139cd48050d4a969d090faaa07b9f42fe954627/diff:/var/lib/docker/overlay2/f18c15fd90b361f7a13265b5426d985a47e261abde790665028916551b5218f3/diff:/var/lib/docker/overlay2/0f266ad6b65c857206fd10e121b74564370ca213f5706493619b6a590c496660/diff:/var/lib/docker/overlay2/fc044060d3681022984120753b0c02afc05afbb256dbdfc9f7f5e966e1d98820/diff:/var/lib/docker/overlay2/91df5011d1388013be2af7bb3097195366fd38d1f46d472e630aab583779f7c0/diff:/var/lib/docker/overlay2/f810a7fbc880b9ff7c367b14e34088e851fa045d860ce4bf4c49999fcf814a6e/diff:/var/lib/docker/overlay2/318584cae4acc059b81627e00ae703167673c73d234d6e64e894fc3500750f90/diff:/var/lib/docker/overlay2/a2e1d86ffb5aec517fe891619294d506621a002f4c53e8d3103d5d4ce777ebaf/diff:/var/lib/docker/overlay2/12fd1d215a6881aa03a06f2b8a5415b483530db121b120b66940e1e5cd2e1b96/diff:/var/lib/docker/overlay2/28bbbfc0404aecb7d7d79b4c2bfec07cd44260c922a982af523bda70bbd7be20/diff:/var/lib/docker/overlay2/4dc0077174d58a8904abddfc67a48e6dd082a1eebc72518af19da37b4eff7b2c/diff:/var/lib/docker/overlay2/4d39db844b44258dbb67b16662175b453df7bfd43274abbf1968486539955750/diff:/var/lib/docker/overlay2/ca34d73c6c31358a3eb714a014a5961863e05dee505a1cfca2c8829380ce362b/diff:/var/lib/docker/overlay2/0c0595112799a0b3604c58158946fb3d0657c4198a6a72e12fbe29a74174d3ea/diff:/var/lib/docker/overlay2/5fc43276da56e90293816918613014e7cec7bedc292a062d39d034c95d56351d/diff:/var/lib/docker/overlay2/71a282cb60752128ee370ced1695c67c421341d364956818e5852fd6714a0e64/diff:/var/lib/docker/overlay2/07723c7054e35caae4987fa66d3d1fd44de0d2875612274dde2bf04e8349b0a0/diff:/var/lib/docker/overlay2/0433db88749fb49b0f02cc65b7113c97134270991a8a82bbe7ff4432aae7e502/diff",
	                "MergedDir": "/var/lib/docker/overlay2/eef8a6f7ccb5c8643704566458fded2b99fd2571c4a987baf463133666436a74/merged",
	                "UpperDir": "/var/lib/docker/overlay2/eef8a6f7ccb5c8643704566458fded2b99fd2571c4a987baf463133666436a74/diff",
	                "WorkDir": "/var/lib/docker/overlay2/eef8a6f7ccb5c8643704566458fded2b99fd2571c4a987baf463133666436a74/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "cert-options-20210915202501-22848",
	                "Source": "/var/lib/docker/volumes/cert-options-20210915202501-22848/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "cert-options-20210915202501-22848",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8555/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "cert-options-20210915202501-22848",
	                "name.minikube.sigs.k8s.io": "cert-options-20210915202501-22848",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9af57c4fe7c8b894b753781159e9494bc9a2a6e28601a33b4b47764188634b54",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57339"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57340"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57341"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57342"
	                    }
	                ],
	                "8555/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57343"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/9af57c4fe7c8",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "cert-options-20210915202501-22848": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "18fc1254e4db",
	                        "cert-options-20210915202501-22848"
	                    ],
	                    "NetworkID": "68d2a14e215a557d84dc4fe81f7d14978d10b954a25cf03cd9d584ff332d7512",
	                    "EndpointID": "a17d9fce11b967dceca787eb1b7bd8f2a738b418dd5ccfebc69bcf298d66745d",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p cert-options-20210915202501-22848 -n cert-options-20210915202501-22848
helpers_test.go:240: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p cert-options-20210915202501-22848 -n cert-options-20210915202501-22848: (8.4443282s)
helpers_test.go:245: <<< TestCertOptions FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestCertOptions]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-20210915202501-22848 logs -n 25
E0915 20:33:32.251070   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915183056-22848\client.crt: The system cannot find the path specified.
helpers_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p cert-options-20210915202501-22848 logs -n 25: (12.6908007s)
helpers_test.go:253: TestCertOptions logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-------------------------------------------|-------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	| Command |                   Args                    |                  Profile                  |          User           | Version |          Start Time           |           End Time            |
	|---------|-------------------------------------------|-------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	| delete  | -p                                        | insufficient-storage-20210915200614-22848 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 20:06:56 GMT | Wed, 15 Sep 2021 20:07:07 GMT |
	|         | insufficient-storage-20210915200614-22848 |                                           |                         |         |                               |                               |
	| start   | -p pause-20210915200708-22848             | pause-20210915200708-22848                | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 20:07:08 GMT | Wed, 15 Sep 2021 20:15:59 GMT |
	|         | --memory=2048                             |                                           |                         |         |                               |                               |
	|         | --install-addons=false                    |                                           |                         |         |                               |                               |
	|         | --wait=all --driver=docker                |                                           |                         |         |                               |                               |
	| start   | -p pause-20210915200708-22848             | pause-20210915200708-22848                | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 20:16:00 GMT | Wed, 15 Sep 2021 20:17:33 GMT |
	|         | --alsologtostderr -v=1                    |                                           |                         |         |                               |                               |
	|         | --driver=docker                           |                                           |                         |         |                               |                               |
	| pause   | -p pause-20210915200708-22848             | pause-20210915200708-22848                | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 20:17:33 GMT | Wed, 15 Sep 2021 20:17:43 GMT |
	|         | --alsologtostderr -v=5                    |                                           |                         |         |                               |                               |
	| start   | -p                                        | offline-docker-20210915200708-22848       | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 20:07:08 GMT | Wed, 15 Sep 2021 20:18:07 GMT |
	|         | offline-docker-20210915200708-22848       |                                           |                         |         |                               |                               |
	|         | --alsologtostderr -v=1                    |                                           |                         |         |                               |                               |
	|         | --memory=2048 --wait=true                 |                                           |                         |         |                               |                               |
	|         | --driver=docker                           |                                           |                         |         |                               |                               |
	| unpause | -p pause-20210915200708-22848             | pause-20210915200708-22848                | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 20:18:21 GMT | Wed, 15 Sep 2021 20:18:30 GMT |
	|         | --alsologtostderr -v=5                    |                                           |                         |         |                               |                               |
	| delete  | -p                                        | offline-docker-20210915200708-22848       | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 20:18:08 GMT | Wed, 15 Sep 2021 20:18:33 GMT |
	|         | offline-docker-20210915200708-22848       |                                           |                         |         |                               |                               |
	| pause   | -p pause-20210915200708-22848             | pause-20210915200708-22848                | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 20:18:30 GMT | Wed, 15 Sep 2021 20:18:43 GMT |
	|         | --alsologtostderr -v=5                    |                                           |                         |         |                               |                               |
	| start   | -p                                        | running-upgrade-20210915200708-22848      | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 20:19:14 GMT | Wed, 15 Sep 2021 20:23:00 GMT |
	|         | running-upgrade-20210915200708-22848      |                                           |                         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr -v=1      |                                           |                         |         |                               |                               |
	|         | --driver=docker                           |                                           |                         |         |                               |                               |
	| delete  | -p                                        | running-upgrade-20210915200708-22848      | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 20:23:01 GMT | Wed, 15 Sep 2021 20:23:29 GMT |
	|         | running-upgrade-20210915200708-22848      |                                           |                         |         |                               |                               |
	| start   | -p                                        | stopped-upgrade-20210915200708-22848      | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 20:20:05 GMT | Wed, 15 Sep 2021 20:23:29 GMT |
	|         | stopped-upgrade-20210915200708-22848      |                                           |                         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr -v=1      |                                           |                         |         |                               |                               |
	|         | --driver=docker                           |                                           |                         |         |                               |                               |
	| delete  | -p                                        | flannel-20210915202329-22848              | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 20:23:29 GMT | Wed, 15 Sep 2021 20:23:38 GMT |
	|         | flannel-20210915202329-22848              |                                           |                         |         |                               |                               |
	| logs    | -p                                        | stopped-upgrade-20210915200708-22848      | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 20:23:30 GMT | Wed, 15 Sep 2021 20:23:46 GMT |
	|         | stopped-upgrade-20210915200708-22848      |                                           |                         |         |                               |                               |
	| delete  | -p                                        | stopped-upgrade-20210915200708-22848      | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 20:23:46 GMT | Wed, 15 Sep 2021 20:24:13 GMT |
	|         | stopped-upgrade-20210915200708-22848      |                                           |                         |         |                               |                               |
	| start   | -p                                        | force-systemd-flag-20210915201833-22848   | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 20:18:33 GMT | Wed, 15 Sep 2021 20:24:20 GMT |
	|         | force-systemd-flag-20210915201833-22848   |                                           |                         |         |                               |                               |
	|         | --memory=2048 --force-systemd             |                                           |                         |         |                               |                               |
	|         | --alsologtostderr -v=5 --driver=docker    |                                           |                         |         |                               |                               |
	| -p      | force-systemd-flag-20210915201833-22848   | force-systemd-flag-20210915201833-22848   | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 20:24:20 GMT | Wed, 15 Sep 2021 20:24:29 GMT |
	|         | ssh docker info --format                  |                                           |                         |         |                               |                               |
	|         | {{.CgroupDriver}}                         |                                           |                         |         |                               |                               |
	| delete  | -p                                        | force-systemd-flag-20210915201833-22848   | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 20:24:30 GMT | Wed, 15 Sep 2021 20:25:01 GMT |
	|         | force-systemd-flag-20210915201833-22848   |                                           |                         |         |                               |                               |
	| start   | -p                                        | force-systemd-env-20210915202338-22848    | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 20:23:38 GMT | Wed, 15 Sep 2021 20:32:27 GMT |
	|         | force-systemd-env-20210915202338-22848    |                                           |                         |         |                               |                               |
	|         | --memory=2048 --alsologtostderr -v=5      |                                           |                         |         |                               |                               |
	|         | --driver=docker                           |                                           |                         |         |                               |                               |
	| -p      | force-systemd-env-20210915202338-22848    | force-systemd-env-20210915202338-22848    | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 20:32:28 GMT | Wed, 15 Sep 2021 20:32:42 GMT |
	|         | ssh docker info --format                  |                                           |                         |         |                               |                               |
	|         | {{.CgroupDriver}}                         |                                           |                         |         |                               |                               |
	| start   | -p                                        | docker-flags-20210915202413-22848         | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 20:24:14 GMT | Wed, 15 Sep 2021 20:33:02 GMT |
	|         | docker-flags-20210915202413-22848         |                                           |                         |         |                               |                               |
	|         | --cache-images=false                      |                                           |                         |         |                               |                               |
	|         | --memory=2048                             |                                           |                         |         |                               |                               |
	|         | --install-addons=false                    |                                           |                         |         |                               |                               |
	|         | --wait=false --docker-env=FOO=BAR         |                                           |                         |         |                               |                               |
	|         | --docker-env=BAZ=BAT                      |                                           |                         |         |                               |                               |
	|         | --docker-opt=debug                        |                                           |                         |         |                               |                               |
	|         | --docker-opt=icc=true                     |                                           |                         |         |                               |                               |
	|         | --alsologtostderr -v=5                    |                                           |                         |         |                               |                               |
	|         | --driver=docker                           |                                           |                         |         |                               |                               |
	| -p      | docker-flags-20210915202413-22848         | docker-flags-20210915202413-22848         | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 20:33:03 GMT | Wed, 15 Sep 2021 20:33:08 GMT |
	|         | ssh sudo systemctl show docker            |                                           |                         |         |                               |                               |
	|         | --property=Environment --no-pager         |                                           |                         |         |                               |                               |
	| start   | -p                                        | cert-options-20210915202501-22848         | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 20:25:02 GMT | Wed, 15 Sep 2021 20:33:13 GMT |
	|         | cert-options-20210915202501-22848         |                                           |                         |         |                               |                               |
	|         | --memory=2048                             |                                           |                         |         |                               |                               |
	|         | --apiserver-ips=127.0.0.1                 |                                           |                         |         |                               |                               |
	|         | --apiserver-ips=192.168.15.15             |                                           |                         |         |                               |                               |
	|         | --apiserver-names=localhost               |                                           |                         |         |                               |                               |
	|         | --apiserver-names=www.google.com          |                                           |                         |         |                               |                               |
	|         | --apiserver-port=8555                     |                                           |                         |         |                               |                               |
	|         | --driver=docker                           |                                           |                         |         |                               |                               |
	|         | --apiserver-name=localhost                |                                           |                         |         |                               |                               |
	| -p      | docker-flags-20210915202413-22848         | docker-flags-20210915202413-22848         | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 20:33:09 GMT | Wed, 15 Sep 2021 20:33:15 GMT |
	|         | ssh sudo systemctl show docker            |                                           |                         |         |                               |                               |
	|         | --property=ExecStart --no-pager           |                                           |                         |         |                               |                               |
	| delete  | -p                                        | force-systemd-env-20210915202338-22848    | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 20:32:42 GMT | Wed, 15 Sep 2021 20:33:15 GMT |
	|         | force-systemd-env-20210915202338-22848    |                                           |                         |         |                               |                               |
	| -p      | cert-options-20210915202501-22848         | cert-options-20210915202501-22848         | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 20:33:14 GMT | Wed, 15 Sep 2021 20:33:19 GMT |
	|         | ssh openssl x509 -text -noout -in         |                                           |                         |         |                               |                               |
	|         | /var/lib/minikube/certs/apiserver.crt     |                                           |                         |         |                               |                               |
	|---------|-------------------------------------------|-------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/09/15 20:33:16
	Running on machine: windows-server-1
	Binary: Built with gc go1.17 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0915 20:33:16.184661   84484 out.go:298] Setting OutFile to fd 2512 ...
	I0915 20:33:16.185659   84484 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0915 20:33:16.186674   84484 out.go:311] Setting ErrFile to fd 2588...
	I0915 20:33:16.186674   84484 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0915 20:33:16.211666   84484 out.go:305] Setting JSON to false
	I0915 20:33:16.219688   84484 start.go:111] hostinfo: {"hostname":"windows-server-1","uptime":9157469,"bootTime":1622580527,"procs":164,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"9879231f-6171-435d-bab4-5b366cc6391b"}
	W0915 20:33:16.219688   84484 start.go:119] gopshost.Virtualization returned error: not implemented yet
	I0915 20:33:16.220690   84484 out.go:177] * [kubernetes-upgrade-20210915203315-22848] minikube v1.23.0 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	I0915 20:33:16.220690   84484 notify.go:169] Checking for updates...
	I0915 20:33:16.220690   84484 out.go:177]   - KUBECONFIG=C:\Users\jenkins\minikube-integration\kubeconfig
	I0915 20:33:16.229869   84484 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins\minikube-integration\.minikube
	W0915 20:33:13.507936   25104 cli_runner.go:162] docker network inspect missing-upgrade-20210915202421-22848 returned with exit code 1
	I0915 20:33:13.507936   25104 network_create.go:258] error running [docker network inspect missing-upgrade-20210915202421-22848]: docker network inspect missing-upgrade-20210915202421-22848: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: missing-upgrade-20210915202421-22848
	I0915 20:33:13.507936   25104 network_create.go:260] output of [docker network inspect missing-upgrade-20210915202421-22848]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: missing-upgrade-20210915202421-22848
	
	** /stderr **
	I0915 20:33:13.517951   25104 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0915 20:33:14.412159   25104 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000006270] misses:0}
	I0915 20:33:14.412159   25104 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0915 20:33:14.412159   25104 network_create.go:106] attempt to create docker network missing-upgrade-20210915202421-22848 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0915 20:33:14.424162   25104 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true missing-upgrade-20210915202421-22848
	W0915 20:33:15.245647   25104 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true missing-upgrade-20210915202421-22848 returned with exit code 1
	W0915 20:33:15.245842   25104 network_create.go:98] failed to create docker network missing-upgrade-20210915202421-22848 192.168.49.0/24, will retry: subnet is taken
	I0915 20:33:15.272298   25104 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006270] amended:false}} dirty:map[] misses:0}
	I0915 20:33:15.272885   25104 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0915 20:33:15.298302   25104 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006270] amended:true}} dirty:map[192.168.49.0:0xc000006270 192.168.58.0:0xc0000063c8] misses:0}
	I0915 20:33:15.298302   25104 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0915 20:33:15.298302   25104 network_create.go:106] attempt to create docker network missing-upgrade-20210915202421-22848 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0915 20:33:15.316716   25104 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true missing-upgrade-20210915202421-22848
	W0915 20:33:16.163656   25104 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true missing-upgrade-20210915202421-22848 returned with exit code 1
	W0915 20:33:16.163656   25104 network_create.go:98] failed to create docker network missing-upgrade-20210915202421-22848 192.168.58.0/24, will retry: subnet is taken
	I0915 20:33:16.181662   25104 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006270] amended:true}} dirty:map[192.168.49.0:0xc000006270 192.168.58.0:0xc0000063c8] misses:1}
	I0915 20:33:16.181662   25104 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0915 20:33:16.195770   25104 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006270] amended:true}} dirty:map[192.168.49.0:0xc000006270 192.168.58.0:0xc0000063c8 192.168.67.0:0xc0003827b0] misses:1}
	I0915 20:33:16.195770   25104 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0915 20:33:16.195770   25104 network_create.go:106] attempt to create docker network missing-upgrade-20210915202421-22848 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0915 20:33:16.211666   25104 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true missing-upgrade-20210915202421-22848
	I0915 20:33:17.550592   25104 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true missing-upgrade-20210915202421-22848: (1.3379531s)
	I0915 20:33:17.550592   25104 network_create.go:90] docker network missing-upgrade-20210915202421-22848 192.168.67.0/24 created
	I0915 20:33:17.550592   25104 kic.go:106] calculated static IP "192.168.67.2" for the "missing-upgrade-20210915202421-22848" container
	I0915 20:33:17.587957   25104 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0915 20:33:16.232688   84484 out.go:177]   - MINIKUBE_LOCATION=12425
	I0915 20:33:16.237659   84484 config.go:177] Loaded profile config "cert-options-20210915202501-22848": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.1
	I0915 20:33:16.237659   84484 config.go:177] Loaded profile config "docker-flags-20210915202413-22848": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.1
	I0915 20:33:16.238685   84484 config.go:177] Loaded profile config "missing-upgrade-20210915202421-22848": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0915 20:33:16.239653   84484 config.go:177] Loaded profile config "pause-20210915200708-22848": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.1
	I0915 20:33:16.241658   84484 driver.go:343] Setting default libvirt URI to qemu:///system
	I0915 20:33:18.596755   84484 docker.go:132] docker version: linux-20.10.5
	I0915 20:33:18.607738   84484 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0915 20:33:20.180314   84484 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.5725865s)
	I0915 20:33:20.182311   84484 info.go:263] docker info: {ID:AZM6:4F7P:D7J3:PGKE:EIYN:3OQU:SEA3:BB2T:P6VC:GKKH:UKSA:R2VX Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:68 OomKillDisable:true NGoroutines:69 SystemTime:2021-09-15 20:33:19.4442539 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0915 20:33:20.186183   84484 out.go:177] * Using the docker driver based on user configuration
	I0915 20:33:20.186183   84484 start.go:278] selected driver: docker
	I0915 20:33:20.186183   84484 start.go:751] validating driver "docker" against <nil>
	I0915 20:33:20.186583   84484 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0915 20:33:20.335434   84484 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0915 20:33:22.062775   84484 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.7273523s)
	I0915 20:33:22.063620   84484 info.go:263] docker info: {ID:AZM6:4F7P:D7J3:PGKE:EIYN:3OQU:SEA3:BB2T:P6VC:GKKH:UKSA:R2VX Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:74 OomKillDisable:true NGoroutines:88 SystemTime:2021-09-15 20:33:21.2670988 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0915 20:33:22.063935   84484 start_flags.go:264] no existing cluster config was found, will generate one from the flags 
	I0915 20:33:22.064353   84484 start_flags.go:719] Wait components to verify : map[apiserver:true system_pods:true]
	I0915 20:33:22.064353   84484 cni.go:93] Creating CNI manager for ""
	I0915 20:33:22.064353   84484 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0915 20:33:22.064353   84484 start_flags.go:278] config:
	{Name:kubernetes-upgrade-20210915203315-22848 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:kubernetes-upgrade-20210915203315-22848 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0915 20:33:22.064353   84484 out.go:177] * Starting control plane node kubernetes-upgrade-20210915203315-22848 in cluster kubernetes-upgrade-20210915203315-22848
	I0915 20:33:22.064353   84484 cache.go:118] Beginning downloading kic base image for docker with docker
	I0915 20:33:22.064353   84484 out.go:177] * Pulling base image ...
	I0915 20:33:22.064353   84484 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime docker
	I0915 20:33:22.064353   84484 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 in local docker daemon
	I0915 20:33:22.064353   84484 preload.go:147] Found local preload: C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v12-v1.14.0-docker-overlay2-amd64.tar.lz4
	I0915 20:33:22.064353   84484 cache.go:57] Caching tarball of preloaded images
	I0915 20:33:22.064353   84484 preload.go:173] Found C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v12-v1.14.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0915 20:33:22.064353   84484 cache.go:60] Finished verifying existence of preloaded tar for  v1.14.0 on docker
	I0915 20:33:22.071238   84484 profile.go:148] Saving config to C:\Users\jenkins\minikube-integration\.minikube\profiles\kubernetes-upgrade-20210915203315-22848\config.json ...
	I0915 20:33:22.071238   84484 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\.minikube\profiles\kubernetes-upgrade-20210915203315-22848\config.json: {Name:mke54b38418203b5d6dced4dbad9cdb3b7980054 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 20:33:22.933167   84484 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 in local docker daemon, skipping pull
	I0915 20:33:22.933167   84484 cache.go:140] gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 exists in daemon, skipping load
	I0915 20:33:22.933762   84484 cache.go:206] Successfully downloaded all kic artifacts
	I0915 20:33:22.933952   84484 start.go:313] acquiring machines lock for kubernetes-upgrade-20210915203315-22848: {Name:mke5bb5a1d11ad10a34d5d3dd0f81b170f08fb85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 20:33:22.934439   84484 start.go:317] acquired machines lock for "kubernetes-upgrade-20210915203315-22848" in 487.3µs
	I0915 20:33:22.934816   84484 start.go:89] Provisioning new machine with config: &{Name:kubernetes-upgrade-20210915203315-22848 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:kubernetes-upgrade-20210915203315-22848 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}
	I0915 20:33:22.935114   84484 start.go:126] createHost starting for "" (driver="docker")
	I0915 20:33:18.438754   25104 cli_runner.go:115] Run: docker volume create missing-upgrade-20210915202421-22848 --label name.minikube.sigs.k8s.io=missing-upgrade-20210915202421-22848 --label created_by.minikube.sigs.k8s.io=true
	I0915 20:33:19.215730   25104 oci.go:102] Successfully created a docker volume missing-upgrade-20210915202421-22848
	I0915 20:33:19.237507   25104 cli_runner.go:115] Run: docker run --rm --name missing-upgrade-20210915202421-22848-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-20210915202421-22848 --entrypoint /usr/bin/test -v missing-upgrade-20210915202421-22848:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 -d /var/lib
	I0915 20:33:22.938696   84484 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0915 20:33:22.940068   84484 start.go:160] libmachine.API.Create for "kubernetes-upgrade-20210915203315-22848" (driver="docker")
	I0915 20:33:22.940240   84484 client.go:168] LocalClient.Create starting
	I0915 20:33:22.941565   84484 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem
	I0915 20:33:22.942132   84484 main.go:130] libmachine: Decoding PEM data...
	I0915 20:33:22.942321   84484 main.go:130] libmachine: Parsing certificate...
	I0915 20:33:22.942940   84484 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem
	I0915 20:33:22.943554   84484 main.go:130] libmachine: Decoding PEM data...
	I0915 20:33:22.943554   84484 main.go:130] libmachine: Parsing certificate...
	I0915 20:33:22.963722   84484 cli_runner.go:115] Run: docker network inspect kubernetes-upgrade-20210915203315-22848 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0915 20:33:23.823384   84484 cli_runner.go:162] docker network inspect kubernetes-upgrade-20210915203315-22848 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0915 20:33:23.834763   84484 network_create.go:255] running [docker network inspect kubernetes-upgrade-20210915203315-22848] to gather additional debugging logs...
	I0915 20:33:23.834763   84484 cli_runner.go:115] Run: docker network inspect kubernetes-upgrade-20210915203315-22848
	W0915 20:33:24.680636   84484 cli_runner.go:162] docker network inspect kubernetes-upgrade-20210915203315-22848 returned with exit code 1
	I0915 20:33:24.681024   84484 network_create.go:258] error running [docker network inspect kubernetes-upgrade-20210915203315-22848]: docker network inspect kubernetes-upgrade-20210915203315-22848: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubernetes-upgrade-20210915203315-22848
	I0915 20:33:24.681214   84484 network_create.go:260] output of [docker network inspect kubernetes-upgrade-20210915203315-22848]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubernetes-upgrade-20210915203315-22848
	
	** /stderr **
	I0915 20:33:24.691294   84484 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0915 20:33:25.581761   84484 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00094e260] misses:0}
	I0915 20:33:25.582201   84484 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0915 20:33:25.582201   84484 network_create.go:106] attempt to create docker network kubernetes-upgrade-20210915203315-22848 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0915 20:33:25.597522   84484 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20210915203315-22848
	I0915 20:33:25.307532   25104 cli_runner.go:168] Completed: docker run --rm --name missing-upgrade-20210915202421-22848-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-20210915202421-22848 --entrypoint /usr/bin/test -v missing-upgrade-20210915202421-22848:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 -d /var/lib: (6.0699474s)
	I0915 20:33:25.307637   25104 oci.go:106] Successfully prepared a docker volume missing-upgrade-20210915202421-22848
	I0915 20:33:25.308017   25104 preload.go:131] Checking if preload exists for k8s version v1.18.0 and runtime docker
	I0915 20:33:25.348221   25104 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0915 20:33:26.996811   25104 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.6484699s)
	I0915 20:33:26.996811   25104 info.go:263] docker info: {ID:AZM6:4F7P:D7J3:PGKE:EIYN:3OQU:SEA3:BB2T:P6VC:GKKH:UKSA:R2VX Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:70 OomKillDisable:true NGoroutines:75 SystemTime:2021-09-15 20:33:26.1785579 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0915 20:33:27.009797   25104 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2021-09-15 20:25:36 UTC, end at Wed 2021-09-15 20:33:38 UTC. --
	Sep 15 20:30:36 cert-options-20210915202501-22848 dockerd[468]: time="2021-09-15T20:30:36.960715900Z" level=info msg="Processing signal 'terminated'"
	Sep 15 20:30:36 cert-options-20210915202501-22848 dockerd[468]: time="2021-09-15T20:30:36.992561900Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 15 20:30:36 cert-options-20210915202501-22848 dockerd[468]: time="2021-09-15T20:30:36.997775900Z" level=info msg="Daemon shutdown complete"
	Sep 15 20:30:37 cert-options-20210915202501-22848 systemd[1]: docker.service: Succeeded.
	Sep 15 20:30:37 cert-options-20210915202501-22848 systemd[1]: Stopped Docker Application Container Engine.
	Sep 15 20:30:37 cert-options-20210915202501-22848 systemd[1]: Starting Docker Application Container Engine...
	Sep 15 20:30:37 cert-options-20210915202501-22848 dockerd[779]: time="2021-09-15T20:30:37.354885500Z" level=info msg="Starting up"
	Sep 15 20:30:37 cert-options-20210915202501-22848 dockerd[779]: time="2021-09-15T20:30:37.365639700Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Sep 15 20:30:37 cert-options-20210915202501-22848 dockerd[779]: time="2021-09-15T20:30:37.365679400Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Sep 15 20:30:37 cert-options-20210915202501-22848 dockerd[779]: time="2021-09-15T20:30:37.365722000Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Sep 15 20:30:37 cert-options-20210915202501-22848 dockerd[779]: time="2021-09-15T20:30:37.365740200Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Sep 15 20:30:37 cert-options-20210915202501-22848 dockerd[779]: time="2021-09-15T20:30:37.372900200Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Sep 15 20:30:37 cert-options-20210915202501-22848 dockerd[779]: time="2021-09-15T20:30:37.373271300Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Sep 15 20:30:37 cert-options-20210915202501-22848 dockerd[779]: time="2021-09-15T20:30:37.373310800Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Sep 15 20:30:37 cert-options-20210915202501-22848 dockerd[779]: time="2021-09-15T20:30:37.373326500Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Sep 15 20:30:37 cert-options-20210915202501-22848 dockerd[779]: time="2021-09-15T20:30:37.410486600Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Sep 15 20:30:37 cert-options-20210915202501-22848 dockerd[779]: time="2021-09-15T20:30:37.485584600Z" level=info msg="Loading containers: start."
	Sep 15 20:30:38 cert-options-20210915202501-22848 dockerd[779]: time="2021-09-15T20:30:38.680656800Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 15 20:30:39 cert-options-20210915202501-22848 dockerd[779]: time="2021-09-15T20:30:39.042246100Z" level=info msg="Loading containers: done."
	Sep 15 20:30:39 cert-options-20210915202501-22848 dockerd[779]: time="2021-09-15T20:30:39.202800400Z" level=info msg="Docker daemon" commit=75249d8 graphdriver(s)=overlay2 version=20.10.8
	Sep 15 20:30:39 cert-options-20210915202501-22848 dockerd[779]: time="2021-09-15T20:30:39.203720700Z" level=info msg="Daemon has completed initialization"
	Sep 15 20:30:39 cert-options-20210915202501-22848 systemd[1]: Started Docker Application Container Engine.
	Sep 15 20:30:39 cert-options-20210915202501-22848 dockerd[779]: time="2021-09-15T20:30:39.605625200Z" level=info msg="API listen on [::]:2376"
	Sep 15 20:30:39 cert-options-20210915202501-22848 dockerd[779]: time="2021-09-15T20:30:39.665603100Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 15 20:32:09 cert-options-20210915202501-22848 dockerd[779]: time="2021-09-15T20:32:09.687229200Z" level=info msg="ignoring event" container=181000836a33cc82efff2410d5abbccca3eab598baa3fb2597d2e52f08b95720 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	3a0a53c101635       8d147537fb7d1       12 seconds ago       Running             coredns                   0                   57dcd5b65d006
	d47d0abd865e9       6e38f40d628db       20 seconds ago       Running             storage-provisioner       0                   ac541741b63c1
	b83643181d2c5       36c4ebbc9d979       22 seconds ago       Running             kube-proxy                0                   21c06b15319bc
	34599660af1b0       6e002eb89a881       About a minute ago   Running             kube-controller-manager   1                   044dd5bc3a00b
	0eed1b850b347       aca5ededae9c8       2 minutes ago        Running             kube-scheduler            0                   572c2ed575eb8
	181000836a33c       6e002eb89a881       2 minutes ago        Exited              kube-controller-manager   0                   044dd5bc3a00b
	13e85a8e932d2       f30469a2491a5       2 minutes ago        Running             kube-apiserver            0                   27e045d9e301c
	a86db474fd3eb       0048118155842       2 minutes ago        Running             etcd                      0                   9666b129cc8cb
	
	* 
	* ==> coredns [3a0a53c10163] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.4
	linux/amd64, go1.16.4, 053c4d5
	
	* 
	* ==> describe nodes <==
	* Name:               cert-options-20210915202501-22848
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=cert-options-20210915202501-22848
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0d321606059ead2904f4f5ddd59a9a7026c7ee04
	                    minikube.k8s.io/name=cert-options-20210915202501-22848
	                    minikube.k8s.io/updated_at=2021_09_15T20_32_44_0700
	                    minikube.k8s.io/version=v1.23.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 15 Sep 2021 20:32:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  cert-options-20210915202501-22848
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 15 Sep 2021 20:33:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 15 Sep 2021 20:33:09 +0000   Wed, 15 Sep 2021 20:32:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 15 Sep 2021 20:33:09 +0000   Wed, 15 Sep 2021 20:32:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 15 Sep 2021 20:33:09 +0000   Wed, 15 Sep 2021 20:32:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 15 Sep 2021 20:33:09 +0000   Wed, 15 Sep 2021 20:33:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    cert-options-20210915202501-22848
	Capacity:
	  cpu:                4
	  ephemeral-storage:  65792556Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             20481980Ki
	  pods:               110
	Allocatable:
	  cpu:                4
	  ephemeral-storage:  65792556Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             20481980Ki
	  pods:               110
	System Info:
	  Machine ID:                 4b5e5cdd53d44f5ab575bb522d42acca
	  System UUID:                3e66b93a-1e96-43c0-afbc-2b516574f845
	  Boot ID:                    7b7b18db-3e3e-49d3-a2cb-ac38329b7bd9
	  Kernel Version:             4.19.121-linuxkit
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.8
	  Kubelet Version:            v1.22.1
	  Kube-Proxy Version:         v1.22.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-78fcd69978-h2n6w                                     100m (2%!)(MISSING)     0 (0%!)(MISSING)      70Mi (0%!)(MISSING)        170Mi (0%!)(MISSING)     35s
	  kube-system                 etcd-cert-options-20210915202501-22848                       100m (2%!)(MISSING)     0 (0%!)(MISSING)      100Mi (0%!)(MISSING)       0 (0%!)(MISSING)         29s
	  kube-system                 kube-apiserver-cert-options-20210915202501-22848             250m (6%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         69s
	  kube-system                 kube-controller-manager-cert-options-20210915202501-22848    200m (5%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         80s
	  kube-system                 kube-proxy-nbv99                                             0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         35s
	  kube-system                 kube-scheduler-cert-options-20210915202501-22848             100m (2%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         82s
	  kube-system                 storage-provisioner                                          0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (18%!)(MISSING)  0 (0%!)(MISSING)
	  memory             170Mi (0%!)(MISSING)  170Mi (0%!)(MISSING)
	  ephemeral-storage  0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-1Gi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-2Mi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
	Events:
	  Type    Reason                   Age   From     Message
	  ----    ------                   ----  ----     -------
	  Normal  Starting                 47s   kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  44s   kubelet  Node cert-options-20210915202501-22848 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    44s   kubelet  Node cert-options-20210915202501-22848 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     44s   kubelet  Node cert-options-20210915202501-22848 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             41s   kubelet  Node cert-options-20210915202501-22848 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  31s   kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeReady                30s   kubelet  Node cert-options-20210915202501-22848 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.000000]  hrtimer_interrupt+0x92/0x165
	[  +0.000000]  hv_stimer0_isr+0x20/0x2d
	[  +0.000000]  hv_stimer0_vector_handler+0x3b/0x57
	[  +0.000000]  hv_stimer0_callback_vector+0xf/0x20
	[  +0.000000]  </IRQ>
	[  +0.000000] RIP: 0010:arch_local_irq_enable+0x7/0x8
	[  +0.000000] Code: ef ff ff 0f 20 d8 0f 1f 40 00 c3 48 89 f8 0f 1f 40 00 c3 48 89 f8 0f 1f 40 00 c3 48 89 f8 0f 1f 40 00 c3 fb 66 0f 1f 44 00 00 <c3> 0f 1f 44 00 00 40 f6 c7 02 74 12 48 b8 ff 0f 00 00 00 00 f0 ff
	[  +0.000000] RSP: 0000:ffffbcaf423f7ee0 EFLAGS: 00000206 ORIG_RAX: ffffffffffffff12
	[  +0.000000] RAX: 0000000080000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000000] RDX: 000055a9735499db RSI: 0000000000000004 RDI: ffffbcaf423f7f58
	[  +0.000000] RBP: ffffbcaf423f7f58 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000000] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000004
	[  +0.000000] R13: 000055a9735499db R14: ffff97d483b18dc0 R15: ffff97d4e4dc7400
	[  +0.000000]  __do_page_fault+0x17f/0x42d
	[  +0.000000]  ? page_fault+0x8/0x30
	[  +0.000000]  page_fault+0x1e/0x30
	[  +0.000000] RIP: 0033:0x55a9730c8f03
	[  +0.000000] Code: 0f 6f d9 66 0f ef 0d ec 85 97 00 66 0f ef 15 f4 85 97 00 66 0f ef 1d fc 85 97 00 66 0f 38 dc c9 66 0f 38 dc d2 66 0f 38 dc db <f3> 0f 6f 20 f3 0f 6f 68 10 f3 0f 6f 74 08 e0 f3 0f 6f 7c 08 f0 66
	[  +0.000000] RSP: 002b:000000c00004bdc8 EFLAGS: 00010287
	[  +0.000000] RAX: 000055a9735499db RBX: 000055a9730cb860 RCX: 0000000000000022
	[  +0.000000] RDX: 000000c00004bde0 RSI: 000000c00004be48 RDI: 000000c000080868
	[  +0.000000] RBP: 000000c00004be28 R08: 000055a97353d681 R09: 0000000000000000
	[  +0.000000] R10: 0000000000000004 R11: 000000c0000807d0 R12: 000000000000001a
	[  +0.000000] R13: 0000000000000006 R14: 0000000000000008 R15: 0000000000000017
	[  +0.000000] ---[ end trace cdbbbbc925f6eff0 ]---
	
	* 
	* ==> etcd [a86db474fd3e] <==
	* {"level":"warn","ts":"2021-09-15T20:32:44.448Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"118.255ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3238505856976812706 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/bootstrap-signer\" mod_revision:284 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/bootstrap-signer\" value_size:168 >> failure:<request_range:<key:\"/registry/serviceaccounts/kube-system/bootstrap-signer\" > >>","response":"size:16"}
	{"level":"info","ts":"2021-09-15T20:32:44.448Z","caller":"traceutil/trace.go:171","msg":"trace[2059151567] linearizableReadLoop","detail":"{readStateIndex:295; appliedIndex:294; }","duration":"125.2239ms","start":"2021-09-15T20:32:44.323Z","end":"2021-09-15T20:32:44.448Z","steps":["trace[2059151567] 'read index received'  (duration: 1.1026ms)","trace[2059151567] 'applied index is now lower than readState.Index'  (duration: 124.1195ms)"],"step_count":2}
	{"level":"info","ts":"2021-09-15T20:32:44.448Z","caller":"traceutil/trace.go:171","msg":"trace[1417659081] transaction","detail":"{read_only:false; response_revision:287; number_of_response:1; }","duration":"242.2229ms","start":"2021-09-15T20:32:44.206Z","end":"2021-09-15T20:32:44.448Z","steps":["trace[1417659081] 'process raft request'  (duration: 139.9146ms)","trace[1417659081] 'get key's previous created_revision and leaseID' {req_type:put; key:/registry/serviceaccounts/kube-system/bootstrap-signer; req_size:227; } (duration: 101.2888ms)"],"step_count":2}
	{"level":"warn","ts":"2021-09-15T20:32:44.461Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"112.1024ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/daemon-set-controller\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2021-09-15T20:32:44.461Z","caller":"traceutil/trace.go:171","msg":"trace[62791144] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/daemon-set-controller; range_end:; response_count:0; response_revision:287; }","duration":"112.1683ms","start":"2021-09-15T20:32:44.349Z","end":"2021-09-15T20:32:44.461Z","steps":["trace[62791144] 'agreement among raft nodes before linearized reading'  (duration: 112.0845ms)"],"step_count":1}
	{"level":"warn","ts":"2021-09-15T20:32:44.461Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"111.8924ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/podsecuritypolicy/\" range_end:\"/registry/podsecuritypolicy0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2021-09-15T20:32:44.461Z","caller":"traceutil/trace.go:171","msg":"trace[1007801193] range","detail":"{range_begin:/registry/podsecuritypolicy/; range_end:/registry/podsecuritypolicy0; response_count:0; response_revision:287; }","duration":"111.9308ms","start":"2021-09-15T20:32:44.349Z","end":"2021-09-15T20:32:44.461Z","steps":["trace[1007801193] 'agreement among raft nodes before linearized reading'  (duration: 111.8735ms)"],"step_count":1}
	{"level":"warn","ts":"2021-09-15T20:32:44.461Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"160.6979ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/endpointslicemirroring-controller\" ","response":"range_response_count:1 size:234"}
	{"level":"info","ts":"2021-09-15T20:32:44.461Z","caller":"traceutil/trace.go:171","msg":"trace[880189914] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/endpointslicemirroring-controller; range_end:; response_count:1; response_revision:287; }","duration":"161.2847ms","start":"2021-09-15T20:32:44.300Z","end":"2021-09-15T20:32:44.461Z","steps":["trace[880189914] 'agreement among raft nodes before linearized reading'  (duration: 160.616ms)"],"step_count":1}
	{"level":"warn","ts":"2021-09-15T20:32:57.760Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"110.3659ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/resourcequota-controller\" ","response":"range_response_count:1 size:216"}
	{"level":"info","ts":"2021-09-15T20:32:57.760Z","caller":"traceutil/trace.go:171","msg":"trace[815592747] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/resourcequota-controller; range_end:; response_count:1; response_revision:332; }","duration":"116.5058ms","start":"2021-09-15T20:32:57.644Z","end":"2021-09-15T20:32:57.760Z","steps":["trace[815592747] 'agreement among raft nodes before linearized reading'  (duration: 41.9733ms)","trace[815592747] 'range keys from in-memory index tree'  (duration: 68.3181ms)"],"step_count":2}
	{"level":"warn","ts":"2021-09-15T20:32:57.857Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"186.4186ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/resourcequota-controller\" ","response":"range_response_count:1 size:216"}
	{"level":"info","ts":"2021-09-15T20:32:57.858Z","caller":"traceutil/trace.go:171","msg":"trace[316722377] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/resourcequota-controller; range_end:; response_count:1; response_revision:332; }","duration":"223.8494ms","start":"2021-09-15T20:32:57.634Z","end":"2021-09-15T20:32:57.858Z","steps":["trace[316722377] 'agreement among raft nodes before linearized reading'  (duration: 28.51ms)","trace[316722377] 'range keys from in-memory index tree'  (duration: 156.818ms)"],"step_count":2}
	{"level":"warn","ts":"2021-09-15T20:33:01.021Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"110.8433ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/generic-garbage-collector\" ","response":"range_response_count:1 size:218"}
	{"level":"info","ts":"2021-09-15T20:33:01.039Z","caller":"traceutil/trace.go:171","msg":"trace[1830635816] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/generic-garbage-collector; range_end:; response_count:1; response_revision:369; }","duration":"128.4562ms","start":"2021-09-15T20:33:00.910Z","end":"2021-09-15T20:33:01.039Z","steps":["trace[1830635816] 'agreement among raft nodes before linearized reading'  (duration: 22.0249ms)","trace[1830635816] 'get authentication metadata'  (duration: 82.9124ms)"],"step_count":2}
	{"level":"info","ts":"2021-09-15T20:33:01.734Z","caller":"traceutil/trace.go:171","msg":"trace[1846979718] transaction","detail":"{read_only:false; response_revision:376; number_of_response:1; }","duration":"154.8333ms","start":"2021-09-15T20:33:01.579Z","end":"2021-09-15T20:33:01.734Z","steps":["trace[1846979718] 'process raft request'  (duration: 59.1533ms)","trace[1846979718] 'compare'  (duration: 76.5461ms)","trace[1846979718] 'marshal mvccpb.KeyValue' {req_type:put; key:/registry/serviceaccounts/kube-system/cronjob-controller; req_size:185; } (duration: 15.0738ms)"],"step_count":3}
	{"level":"warn","ts":"2021-09-15T20:33:03.701Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"206.6381ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/endpointslice-controller\" ","response":"range_response_count:1 size:269"}
	{"level":"info","ts":"2021-09-15T20:33:03.704Z","caller":"traceutil/trace.go:171","msg":"trace[255449043] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/endpointslice-controller; range_end:; response_count:1; response_revision:398; }","duration":"209.2165ms","start":"2021-09-15T20:33:03.495Z","end":"2021-09-15T20:33:03.704Z","steps":["trace[255449043] 'agreement among raft nodes before linearized reading'  (duration: 61.6909ms)","trace[255449043] 'range keys from bolt db'  (duration: 144.8926ms)"],"step_count":2}
	{"level":"info","ts":"2021-09-15T20:33:04.177Z","caller":"traceutil/trace.go:171","msg":"trace[2041936905] transaction","detail":"{read_only:false; response_revision:411; number_of_response:1; }","duration":"108.0473ms","start":"2021-09-15T20:33:04.069Z","end":"2021-09-15T20:33:04.177Z","steps":["trace[2041936905] 'process raft request'  (duration: 35.2488ms)","trace[2041936905] 'compare'  (duration: 54.8224ms)"],"step_count":2}
	{"level":"info","ts":"2021-09-15T20:33:04.188Z","caller":"traceutil/trace.go:171","msg":"trace[906213293] transaction","detail":"{read_only:false; response_revision:412; number_of_response:1; }","duration":"115.6473ms","start":"2021-09-15T20:33:04.072Z","end":"2021-09-15T20:33:04.187Z","steps":["trace[906213293] 'process raft request'  (duration: 104.8185ms)"],"step_count":1}
	{"level":"info","ts":"2021-09-15T20:33:04.188Z","caller":"traceutil/trace.go:171","msg":"trace[1267883987] transaction","detail":"{read_only:false; response_revision:413; number_of_response:1; }","duration":"116.2484ms","start":"2021-09-15T20:33:04.072Z","end":"2021-09-15T20:33:04.188Z","steps":["trace[1267883987] 'process raft request'  (duration: 104.5078ms)"],"step_count":1}
	{"level":"info","ts":"2021-09-15T20:33:04.204Z","caller":"traceutil/trace.go:171","msg":"trace[1083793347] transaction","detail":"{read_only:false; response_revision:414; number_of_response:1; }","duration":"206.0219ms","start":"2021-09-15T20:33:03.998Z","end":"2021-09-15T20:33:04.204Z","steps":["trace[1083793347] 'process raft request'  (duration: 178.835ms)"],"step_count":1}
	{"level":"warn","ts":"2021-09-15T20:33:04.204Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"106.9084ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges/kube-system/\" range_end:\"/registry/limitranges/kube-system0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2021-09-15T20:33:04.204Z","caller":"traceutil/trace.go:171","msg":"trace[705182327] range","detail":"{range_begin:/registry/limitranges/kube-system/; range_end:/registry/limitranges/kube-system0; response_count:0; response_revision:418; }","duration":"106.9706ms","start":"2021-09-15T20:33:04.097Z","end":"2021-09-15T20:33:04.204Z","steps":["trace[705182327] 'agreement among raft nodes before linearized reading'  (duration: 106.889ms)"],"step_count":1}
	{"level":"info","ts":"2021-09-15T20:33:04.223Z","caller":"traceutil/trace.go:171","msg":"trace[1134737840] transaction","detail":"{read_only:false; response_revision:415; number_of_response:1; }","duration":"118.7785ms","start":"2021-09-15T20:33:04.105Z","end":"2021-09-15T20:33:04.223Z","steps":["trace[1134737840] 'process raft request'  (duration: 72.1989ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  20:33:39 up  2:08,  0 users,  load average: 53.85, 32.22, 21.83
	Linux cert-options-20210915202501-22848 4.19.121-linuxkit #1 SMP Thu Jan 21 15:36:34 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [13e85a8e932d] <==
	* I0915 20:32:08.788453       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0915 20:32:08.788486       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0915 20:32:08.927305       1 trace.go:205] Trace[291207942]: "Create" url:/api/v1/namespaces,user-agent:kube-apiserver/v1.22.1 (linux/amd64) kubernetes/632ed30,audit-id:588cb227-a7bb-4845-b94c-cbc54b61b05f,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (15-Sep-2021 20:32:08.262) (total time: 664ms):
	Trace[291207942]: ---"Object stored in database" 663ms (20:32:08.926)
	Trace[291207942]: [664.9982ms] [664.9982ms] END
	I0915 20:32:19.176802       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0915 20:32:24.333579       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0915 20:32:25.796679       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0915 20:32:26.929409       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0915 20:32:26.933665       1 controller.go:611] quota admission added evaluator for: endpoints
	I0915 20:32:26.975551       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0915 20:32:40.711748       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0915 20:32:41.990499       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0915 20:32:43.189418       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0915 20:33:03.479634       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0915 20:33:03.845587       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0915 20:33:21.809205       1 trace.go:205] Trace[210199679]: "Patch" url:/api/v1/namespaces/kube-system/pods/kube-apiserver-cert-options-20210915202501-22848/status,user-agent:kubelet/v1.22.1 (linux/amd64) kubernetes/632ed30,audit-id:c5e4de68-11e1-452c-a8af-36476d42218c,client:192.168.58.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (15-Sep-2021 20:33:21.249) (total time: 559ms):
	Trace[210199679]: ---"Recorded the audit event" 313ms (20:33:21.562)
	Trace[210199679]: ---"About to check admission control" 175ms (20:33:21.738)
	Trace[210199679]: [559.3449ms] [559.3449ms] END
	I0915 20:33:25.712010       1 trace.go:205] Trace[1773007356]: "Patch" url:/api/v1/namespaces/kube-system/pods/kube-apiserver-cert-options-20210915202501-22848/status,user-agent:kubelet/v1.22.1 (linux/amd64) kubernetes/632ed30,audit-id:981f3afb-9733-407f-b50a-13df940127d0,client:192.168.58.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (15-Sep-2021 20:33:25.182) (total time: 529ms):
	Trace[1773007356]: ---"Recorded the audit event" 270ms (20:33:25.452)
	Trace[1773007356]: ---"About to check admission control" 125ms (20:33:25.583)
	Trace[1773007356]: ---"Object stored in database" 127ms (20:33:25.711)
	Trace[1773007356]: [529.9061ms] [529.9061ms] END
	
	* 
	* ==> kube-controller-manager [181000836a33] <==
	* 	/usr/local/go/src/bytes/buffer.go:204 +0xbe
	crypto/tls.(*Conn).readFromUntil(0xc00093d180, 0x5176ac0, 0xc000450228, 0x5, 0xc000450228, 0x400)
		/usr/local/go/src/crypto/tls/conn.go:798 +0xf3
	crypto/tls.(*Conn).readRecordOrCCS(0xc00093d180, 0x0, 0x0, 0x1)
		/usr/local/go/src/crypto/tls/conn.go:605 +0x115
	crypto/tls.(*Conn).readRecord(...)
		/usr/local/go/src/crypto/tls/conn.go:573
	crypto/tls.(*Conn).Read(0xc00093d180, 0xc000f08000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
		/usr/local/go/src/crypto/tls/conn.go:1276 +0x165
	bufio.(*Reader).Read(0xc0003c8c00, 0xc000e263b8, 0x9, 0x9, 0x99f88b, 0xc0009f1c78, 0x4071a5)
		/usr/local/go/src/bufio/bufio.go:227 +0x222
	io.ReadAtLeast(0x516f400, 0xc0003c8c00, 0xc000e263b8, 0x9, 0x9, 0x9, 0xc000e40120, 0xf13f81a3867d00, 0xc000e40120)
		/usr/local/go/src/io/io.go:328 +0x87
	io.ReadFull(...)
		/usr/local/go/src/io/io.go:347
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.readFrameHeader(0xc000e263b8, 0x9, 0x9, 0x516f400, 0xc0003c8c00, 0x0, 0x0, 0x0, 0x0)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:237 +0x89
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc000e26380, 0xc000e03950, 0x0, 0x0, 0x0)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:492 +0xa5
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*clientConnReadLoop).run(0xc0009f1fa8, 0x0, 0x0)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:1821 +0xd8
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).readLoop(0xc0005e9e00)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:1743 +0x6f
	created by k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).newClientConn
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:695 +0x6c5
	
	* 
	* ==> kube-controller-manager [34599660af1b] <==
	* I0915 20:33:02.758139       1 shared_informer.go:247] Caches are synced for ephemeral 
	I0915 20:33:02.758213       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown 
	I0915 20:33:02.773851       1 shared_informer.go:247] Caches are synced for attach detach 
	I0915 20:33:02.786710       1 shared_informer.go:247] Caches are synced for taint 
	I0915 20:33:02.786901       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: 
	W0915 20:33:02.787410       1 node_lifecycle_controller.go:1013] Missing timestamp for Node cert-options-20210915202501-22848. Assuming now as a timestamp.
	I0915 20:33:02.787511       1 node_lifecycle_controller.go:1164] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0915 20:33:02.807410       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0915 20:33:02.813777       1 event.go:291] "Event occurred" object="cert-options-20210915202501-22848" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node cert-options-20210915202501-22848 event: Registered Node cert-options-20210915202501-22848 in Controller"
	I0915 20:33:02.841745       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0915 20:33:02.906313       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0915 20:33:02.991540       1 event.go:291] "Event occurred" object="kube-system/kube-scheduler-cert-options-20210915202501-22848" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0915 20:33:03.048322       1 shared_informer.go:247] Caches are synced for resource quota 
	I0915 20:33:03.074910       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0915 20:33:03.089497       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0915 20:33:03.089544       1 shared_informer.go:247] Caches are synced for resource quota 
	I0915 20:33:03.143896       1 shared_informer.go:247] Caches are synced for stateful set 
	I0915 20:33:03.147482       1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-cert-options-20210915202501-22848" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0915 20:33:03.484840       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0915 20:33:03.486391       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0915 20:33:03.491811       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0915 20:33:03.709233       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-78fcd69978 to 1"
	I0915 20:33:04.350204       1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-78fcd69978-h2n6w"
	I0915 20:33:04.438892       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-nbv99"
	I0915 20:33:12.830062       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	
	* 
	* ==> kube-proxy [b83643181d2c] <==
	* I0915 20:33:24.680285       1 node.go:172] Successfully retrieved node IP: 192.168.58.2
	I0915 20:33:24.680422       1 server_others.go:140] Detected node IP 192.168.58.2
	W0915 20:33:24.680618       1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
	I0915 20:33:27.257231       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0915 20:33:27.257298       1 server_others.go:212] Using iptables Proxier.
	I0915 20:33:27.257329       1 server_others.go:219] creating dualStackProxier for iptables.
	W0915 20:33:27.257395       1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0915 20:33:27.261574       1 server.go:649] Version: v1.22.1
	I0915 20:33:27.380243       1 config.go:315] Starting service config controller
	I0915 20:33:27.380316       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0915 20:33:27.381861       1 config.go:224] Starting endpoint slice config controller
	I0915 20:33:27.381874       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0915 20:33:27.500484       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0915 20:33:27.580558       1 shared_informer.go:247] Caches are synced for service config 
	E0915 20:33:27.620722       1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"cert-options-20210915202501-22848.16a51912886086cc", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc048b405d6a325d0, ext:6698032801, loc:(*time.Location)(0x2d81340)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-cert-options-20210915202501-22848", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:""
, Name:"cert-options-20210915202501-22848", UID:"cert-options-20210915202501-22848", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "cert-options-20210915202501-22848.16a51912886086cc" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
	
	* 
	* ==> kube-scheduler [0eed1b850b34] <==
	* E0915 20:32:12.400381       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0915 20:32:12.400557       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0915 20:32:12.408149       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0915 20:32:12.651344       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0915 20:32:12.761487       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0915 20:32:12.761604       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0915 20:32:12.791287       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0915 20:32:12.900036       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0915 20:32:14.762822       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0915 20:32:15.477495       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0915 20:32:15.611322       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0915 20:32:15.818498       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0915 20:32:16.038509       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0915 20:32:16.251339       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0915 20:32:16.462989       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0915 20:32:16.681481       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0915 20:32:16.762519       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0915 20:32:17.738067       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0915 20:32:17.746448       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0915 20:32:17.989675       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0915 20:32:18.462374       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0915 20:32:18.483354       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0915 20:32:18.687802       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0915 20:32:25.044732       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0915 20:32:41.464291       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2021-09-15 20:25:36 UTC, end at Wed 2021-09-15 20:33:42 UTC. --
	Sep 15 20:33:10 cert-options-20210915202501-22848 kubelet[2787]: I0915 20:33:10.443809    2787 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02e9083e-8922-473c-aec2-457f2fd5c0b4-config-volume\") pod \"coredns-78fcd69978-h2n6w\" (UID: \"02e9083e-8922-473c-aec2-457f2fd5c0b4\") "
	Sep 15 20:33:10 cert-options-20210915202501-22848 kubelet[2787]: I0915 20:33:10.476315    2787 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/2bbcd2ea99180f436b9b9aa2216bd5af-etcd-data\") pod \"etcd-cert-options-20210915202501-22848\" (UID: \"2bbcd2ea99180f436b9b9aa2216bd5af\") "
	Sep 15 20:33:10 cert-options-20210915202501-22848 kubelet[2787]: I0915 20:33:10.489264    2787 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7f772fda85c5c8f6e268392341d1d775-ca-certs\") pod \"kube-apiserver-cert-options-20210915202501-22848\" (UID: \"7f772fda85c5c8f6e268392341d1d775\") "
	Sep 15 20:33:10 cert-options-20210915202501-22848 kubelet[2787]: I0915 20:33:10.493218    2787 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0d7803f8-6632-4e3b-93de-68ef8872041c-lib-modules\") pod \"kube-proxy-nbv99\" (UID: \"0d7803f8-6632-4e3b-93de-68ef8872041c\") "
	Sep 15 20:33:10 cert-options-20210915202501-22848 kubelet[2787]: I0915 20:33:10.493833    2787 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldjdp\" (UniqueName: \"kubernetes.io/projected/02e9083e-8922-473c-aec2-457f2fd5c0b4-kube-api-access-ldjdp\") pod \"coredns-78fcd69978-h2n6w\" (UID: \"02e9083e-8922-473c-aec2-457f2fd5c0b4\") "
	Sep 15 20:33:10 cert-options-20210915202501-22848 kubelet[2787]: I0915 20:33:10.495323    2787 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c3263dcf16f55cf01b2d033292a46fa3-flexvolume-dir\") pod \"kube-controller-manager-cert-options-20210915202501-22848\" (UID: \"c3263dcf16f55cf01b2d033292a46fa3\") "
	Sep 15 20:33:10 cert-options-20210915202501-22848 kubelet[2787]: I0915 20:33:10.502834    2787 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/2bbcd2ea99180f436b9b9aa2216bd5af-etcd-certs\") pod \"etcd-cert-options-20210915202501-22848\" (UID: \"2bbcd2ea99180f436b9b9aa2216bd5af\") "
	Sep 15 20:33:10 cert-options-20210915202501-22848 kubelet[2787]: I0915 20:33:10.505862    2787 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0d7803f8-6632-4e3b-93de-68ef8872041c-xtables-lock\") pod \"kube-proxy-nbv99\" (UID: \"0d7803f8-6632-4e3b-93de-68ef8872041c\") "
	Sep 15 20:33:10 cert-options-20210915202501-22848 kubelet[2787]: I0915 20:33:10.506286    2787 reconciler.go:157] "Reconciler: start to sync state"
	Sep 15 20:33:10 cert-options-20210915202501-22848 kubelet[2787]: E0915 20:33:10.555179    2787 kubelet.go:1701] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-cert-options-20210915202501-22848\" already exists" pod="kube-system/kube-controller-manager-cert-options-20210915202501-22848"
	Sep 15 20:33:10 cert-options-20210915202501-22848 kubelet[2787]: E0915 20:33:10.560886    2787 kubelet.go:1701] "Failed creating a mirror pod for" err="pods \"kube-apiserver-cert-options-20210915202501-22848\" already exists" pod="kube-system/kube-apiserver-cert-options-20210915202501-22848"
	Sep 15 20:33:10 cert-options-20210915202501-22848 kubelet[2787]: E0915 20:33:10.559580    2787 kubelet.go:1701] "Failed creating a mirror pod for" err="pods \"kube-scheduler-cert-options-20210915202501-22848\" already exists" pod="kube-system/kube-scheduler-cert-options-20210915202501-22848"
	Sep 15 20:33:10 cert-options-20210915202501-22848 kubelet[2787]: W0915 20:33:10.594061    2787 container.go:586] Failed to update stats for container "/kubepods/besteffort/pod0d7803f8-6632-4e3b-93de-68ef8872041c": /sys/fs/cgroup/cpuset/kubepods/besteffort/pod0d7803f8-6632-4e3b-93de-68ef8872041c/cpuset.mems found to be empty, continuing to push stats
	Sep 15 20:33:10 cert-options-20210915202501-22848 kubelet[2787]: W0915 20:33:10.941631    2787 container.go:586] Failed to update stats for container "/kubepods/burstable/pod02e9083e-8922-473c-aec2-457f2fd5c0b4": /sys/fs/cgroup/cpuset/kubepods/burstable/pod02e9083e-8922-473c-aec2-457f2fd5c0b4/cpuset.mems found to be empty, continuing to push stats
	Sep 15 20:33:11 cert-options-20210915202501-22848 kubelet[2787]: I0915 20:33:11.690811    2787 topology_manager.go:200] "Topology Admit Handler"
	Sep 15 20:33:11 cert-options-20210915202501-22848 kubelet[2787]: W0915 20:33:11.866463    2787 container.go:586] Failed to update stats for container "/kubepods/besteffort/podf9adb350-56b4-4d29-917b-36a69681ba74": /sys/fs/cgroup/cpuset/kubepods/besteffort/podf9adb350-56b4-4d29-917b-36a69681ba74/cpuset.mems found to be empty, continuing to push stats
	Sep 15 20:33:11 cert-options-20210915202501-22848 kubelet[2787]: I0915 20:33:11.931454    2787 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f9adb350-56b4-4d29-917b-36a69681ba74-tmp\") pod \"storage-provisioner\" (UID: \"f9adb350-56b4-4d29-917b-36a69681ba74\") "
	Sep 15 20:33:11 cert-options-20210915202501-22848 kubelet[2787]: I0915 20:33:11.931846    2787 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbqfk\" (UniqueName: \"kubernetes.io/projected/f9adb350-56b4-4d29-917b-36a69681ba74-kube-api-access-qbqfk\") pod \"storage-provisioner\" (UID: \"f9adb350-56b4-4d29-917b-36a69681ba74\") "
	Sep 15 20:33:13 cert-options-20210915202501-22848 kubelet[2787]: I0915 20:33:13.960484    2787 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="ac541741b63c1ae778907058d057c27f130323d3e9f3018ffd231945c8c1587b"
	Sep 15 20:33:24 cert-options-20210915202501-22848 kubelet[2787]: I0915 20:33:24.625654    2787 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="57dcd5b65d006760d9b858bff60219d2df70e2f79fc202d8e8191f8eff548430"
	Sep 15 20:33:25 cert-options-20210915202501-22848 kubelet[2787]: I0915 20:33:25.045890    2787 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-78fcd69978-h2n6w through plugin: invalid network status for"
	Sep 15 20:33:26 cert-options-20210915202501-22848 kubelet[2787]: I0915 20:33:26.925523    2787 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="21c06b15319bc1a4fd028800a2b795f7bd10ee1543eee2398c406d16a02b891e"
	Sep 15 20:33:28 cert-options-20210915202501-22848 kubelet[2787]: I0915 20:33:28.559912    2787 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-78fcd69978-h2n6w through plugin: invalid network status for"
	Sep 15 20:33:28 cert-options-20210915202501-22848 kubelet[2787]: E0915 20:33:28.894458    2787 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods/burstable/pod02e9083e-8922-473c-aec2-457f2fd5c0b4\": RecentStats: unable to find data in memory cache]"
	Sep 15 20:33:31 cert-options-20210915202501-22848 kubelet[2787]: I0915 20:33:31.377118    2787 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-78fcd69978-h2n6w through plugin: invalid network status for"
	
	* 
	* ==> storage-provisioner [d47d0abd865e] <==
	* I0915 20:33:28.477362       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

-- /stdout --
** stderr ** 
	! Executing "docker container inspect cert-options-20210915202501-22848 --format={{.State.Status}}" took an unusually long time: 2.4419446s
	* Restarting the docker service may improve performance.

** /stderr **
helpers_test.go:255: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p cert-options-20210915202501-22848 -n cert-options-20210915202501-22848
helpers_test.go:255: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p cert-options-20210915202501-22848 -n cert-options-20210915202501-22848: (7.284061s)
helpers_test.go:262: (dbg) Run:  kubectl --context cert-options-20210915202501-22848 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: 
helpers_test.go:273: ======> post-mortem[TestCertOptions]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context cert-options-20210915202501-22848 describe pod 
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context cert-options-20210915202501-22848 describe pod : exit status 1 (322.1465ms)

** stderr ** 
	error: resource name may not be empty

** /stderr **
helpers_test.go:278: kubectl --context cert-options-20210915202501-22848 describe pod : exit status 1
helpers_test.go:176: Cleaning up "cert-options-20210915202501-22848" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-20210915202501-22848

=== CONT  TestCertOptions
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-20210915202501-22848: (29.1293923s)
--- FAIL: TestCertOptions (558.72s)

TestFunctional/parallel/DryRun (9.29s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:1039: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20210915185528-22848 --dry-run --memory 250MB --alsologtostderr --driver=docker

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:1039: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-20210915185528-22848 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (4.2411023s)
-- stdout --
	* [functional-20210915185528-22848] minikube v1.23.0 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	  - KUBECONFIG=C:\Users\jenkins\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12425
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0915 19:03:51.719135   91240 out.go:298] Setting OutFile to fd 1704 ...
	I0915 19:03:51.721111   91240 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0915 19:03:51.721111   91240 out.go:311] Setting ErrFile to fd 2032...
	I0915 19:03:51.721111   91240 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0915 19:03:51.740140   91240 out.go:305] Setting JSON to false
	I0915 19:03:51.744137   91240 start.go:111] hostinfo: {"hostname":"windows-server-1","uptime":9152105,"bootTime":1622580526,"procs":159,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"9879231f-6171-435d-bab4-5b366cc6391b"}
	W0915 19:03:51.744137   91240 start.go:119] gopshost.Virtualization returned error: not implemented yet
	I0915 19:03:51.748127   91240 out.go:177] * [functional-20210915185528-22848] minikube v1.23.0 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	I0915 19:03:51.751134   91240 out.go:177]   - KUBECONFIG=C:\Users\jenkins\minikube-integration\kubeconfig
	I0915 19:03:51.754144   91240 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins\minikube-integration\.minikube
	I0915 19:03:51.763644   91240 out.go:177]   - MINIKUBE_LOCATION=12425
	I0915 19:03:51.765144   91240 config.go:177] Loaded profile config "functional-20210915185528-22848": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.1
	I0915 19:03:51.766127   91240 driver.go:343] Setting default libvirt URI to qemu:///system
	I0915 19:03:54.169034   91240 docker.go:132] docker version: linux-20.10.5
	I0915 19:03:54.180033   91240 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0915 19:03:55.333392   91240 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.1533665s)
	I0915 19:03:55.335205   91240 info.go:263] docker info: {ID:AZM6:4F7P:D7J3:PGKE:EIYN:3OQU:SEA3:BB2T:P6VC:GKKH:UKSA:R2VX Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:53 SystemTime:2021-09-15 19:03:54.8005289 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0915 19:03:55.339545   91240 out.go:177] * Using the docker driver based on existing profile
	I0915 19:03:55.339779   91240 start.go:278] selected driver: docker
	I0915 19:03:55.339779   91240 start.go:751] validating driver "docker" against &{Name:functional-20210915185528-22848 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.1 ClusterName:functional-20210915185528-22848 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.22.1 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0915 19:03:55.339779   91240 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0915 19:03:55.462463   91240 out.go:177] 
	W0915 19:03:55.462917   91240 out.go:242] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0915 19:03:55.465862   91240 out.go:177] 
** /stderr **
functional_test.go:1054: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20210915185528-22848 --dry-run --alsologtostderr -v=1 --driver=docker
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:1054: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-20210915185528-22848 --dry-run --alsologtostderr -v=1 --driver=docker: exit status 1 (5.0436207s)
-- stdout --
	* [functional-20210915185528-22848] minikube v1.23.0 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	  - KUBECONFIG=C:\Users\jenkins\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12425
	* Using the docker driver based on existing profile
-- /stdout --
** stderr ** 
	I0915 19:03:55.952261   88864 out.go:298] Setting OutFile to fd 1852 ...
	I0915 19:03:55.954286   88864 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0915 19:03:55.954286   88864 out.go:311] Setting ErrFile to fd 1748...
	I0915 19:03:55.954286   88864 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0915 19:03:55.979766   88864 out.go:305] Setting JSON to false
	I0915 19:03:55.986264   88864 start.go:111] hostinfo: {"hostname":"windows-server-1","uptime":9152109,"bootTime":1622580526,"procs":158,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"9879231f-6171-435d-bab4-5b366cc6391b"}
	W0915 19:03:55.986264   88864 start.go:119] gopshost.Virtualization returned error: not implemented yet
	I0915 19:03:55.990273   88864 out.go:177] * [functional-20210915185528-22848] minikube v1.23.0 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	I0915 19:03:55.992267   88864 out.go:177]   - KUBECONFIG=C:\Users\jenkins\minikube-integration\kubeconfig
	I0915 19:03:55.994313   88864 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins\minikube-integration\.minikube
	I0915 19:03:55.996265   88864 out.go:177]   - MINIKUBE_LOCATION=12425
	I0915 19:03:55.998282   88864 config.go:177] Loaded profile config "functional-20210915185528-22848": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.1
	I0915 19:03:55.999295   88864 driver.go:343] Setting default libvirt URI to qemu:///system
	I0915 19:03:58.231854   88864 docker.go:132] docker version: linux-20.10.5
	I0915 19:03:58.243806   88864 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0915 19:03:59.496759   88864 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.252961s)
	I0915 19:03:59.498571   88864 info.go:263] docker info: {ID:AZM6:4F7P:D7J3:PGKE:EIYN:3OQU:SEA3:BB2T:P6VC:GKKH:UKSA:R2VX Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:53 SystemTime:2021-09-15 19:03:58.8943212 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0915 19:03:59.502879   88864 out.go:177] * Using the docker driver based on existing profile
	I0915 19:03:59.503184   88864 start.go:278] selected driver: docker
	I0915 19:03:59.503184   88864 start.go:751] validating driver "docker" against &{Name:functional-20210915185528-22848 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.1 ClusterName:functional-20210915185528-22848 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.22.1 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0915 19:03:59.503543   88864 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0915 19:03:59.564811   88864 cli_runner.go:115] Run: docker system info --format "{{json .}}"
** /stderr **
functional_test.go:1059: dry-run exit code = 1, wanted = 0: exit status 1
--- FAIL: TestFunctional/parallel/DryRun (9.29s)
TestFunctional/parallel/StatusCmd (54.07s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:929: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915185528-22848 status
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:929: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915185528-22848 status: (6.3411822s)
functional_test.go:935: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915185528-22848 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:935: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915185528-22848 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: (7.2551073s)
functional_test.go:946: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915185528-22848 status -o json
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:946: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915185528-22848 status -o json: (6.6961714s)
functional_test.go:953: failed to decode json from minikube status. args "out/minikube-windows-amd64.exe -p functional-20210915185528-22848 status -o json". invalid character '{' after top-level value
functional_test.go:956: "out/minikube-windows-amd64.exe -p functional-20210915185528-22848 status -o json" failed: invalid character '{' after top-level value. Missing key Host in json object
functional_test.go:959: "out/minikube-windows-amd64.exe -p functional-20210915185528-22848 status -o json" failed: invalid character '{' after top-level value. Missing key Kubelet in json object
functional_test.go:962: "out/minikube-windows-amd64.exe -p functional-20210915185528-22848 status -o json" failed: invalid character '{' after top-level value. Missing key APIServer in json object
functional_test.go:965: "out/minikube-windows-amd64.exe -p functional-20210915185528-22848 status -o json" failed: invalid character '{' after top-level value. Missing key Kubeconfig in json object
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestFunctional/parallel/StatusCmd]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect functional-20210915185528-22848
helpers_test.go:236: (dbg) docker inspect functional-20210915185528-22848:
-- stdout --
	[
	    {
	        "Id": "911613e33eedd633c7db5193f77cefe3560041e25a83356424e967416578b7ce",
	        "Created": "2021-09-15T18:55:42.6785746Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 27207,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-09-15T18:55:44.5043231Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:83b5a81388468b1ffcd3874b4f24c1406c63c33ac07797cc8bed6ad0207d36a8",
	        "ResolvConfPath": "/var/lib/docker/containers/911613e33eedd633c7db5193f77cefe3560041e25a83356424e967416578b7ce/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/911613e33eedd633c7db5193f77cefe3560041e25a83356424e967416578b7ce/hostname",
	        "HostsPath": "/var/lib/docker/containers/911613e33eedd633c7db5193f77cefe3560041e25a83356424e967416578b7ce/hosts",
	        "LogPath": "/var/lib/docker/containers/911613e33eedd633c7db5193f77cefe3560041e25a83356424e967416578b7ce/911613e33eedd633c7db5193f77cefe3560041e25a83356424e967416578b7ce-json.log",
	        "Name": "/functional-20210915185528-22848",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-20210915185528-22848:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-20210915185528-22848",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4194304000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7b6d40e371d77fb8dadecd60798ef83c7557fe3841ddf04cd4f88525b795a293-init/diff:/var/lib/docker/overlay2/a259804ff45c264548e9459111f8eb7e789339b3253b50b62afde896e9e19e34/diff:/var/lib/docker/overlay2/61882a81480713e64bf02bef67583a0609b2be0589d08187547a88789584af86/diff:/var/lib/docker/overlay2/a41d1f5e24156c1d438fe25c567f3c3492d15cb77b1bf5545be9086be845138a/diff:/var/lib/docker/overlay2/86e30e10438032d0a02b54850ad0316347488f3d5b831234af1e91f943269850/diff:/var/lib/docker/overlay2/f6962936c0c1b0636454847e8e963a472786602e15a00d5e020827c2372acfce/diff:/var/lib/docker/overlay2/5eee83c6029359aefecbba85cc6d456e3a5a97c3ef6e9f4850e8a53c62b30ef5/diff:/var/lib/docker/overlay2/fdaa4e134ab960962e0a388adaa3a6aa59dd139cc016dfd4cdf4565bc80e8469/diff:/var/lib/docker/overlay2/9e1b9be7e17136fa81b0a224e2fab9704d3234ca119d87c14f9a676bbdb023f5/diff:/var/lib/docker/overlay2/ffe06185e93cb7ae8d48d84ea9be8817f2ae3d2aae85114ce41477579e23debd/diff:/var/lib/docker/overlay2/22171320a621ffe79c2acb0c13308b1b0cd3bc94a4083992e7b8589b820c625c/diff:/var/lib/docker/overlay2/eb2fb3ccafd6cb1c26a9642601357b3e0563e9e9361a5ab359bf1af592a0d709/diff:/var/lib/docker/overlay2/6081368e802a14f6f6a7424eb7af3f5f29f85bf59ed0a0709ce25b53738095cb/diff:/var/lib/docker/overlay2/fd7176e5912a824a0543fa3ab5170921538a287401ff8a451c90e1ef0fd8adea/diff:/var/lib/docker/overlay2/eec5078968f5e7332ff82191a780be0efef38aef75ea7cd67723ab3d2760c281/diff:/var/lib/docker/overlay2/d18d41a44c04cb695c4b69ac0db0d5807cee4ca8a5a695629f97e2d8d9cf9461/diff:/var/lib/docker/overlay2/b125406c01cea6a83fa5515a19bb6822d1194fcd47eeb1ed541b9304804a54be/diff:/var/lib/docker/overlay2/b49ae7a2c3101c5b094f611e08fb7b68d8688cb3c333066f697aafc1dc7c2c7e/diff:/var/lib/docker/overlay2/ce599106d279966257baab0cc43ed0366d690702b449073e812a47ae6698dedf/diff:/var/lib/docker/overlay2/5f005c2e8ab4cd52b59f5118e6f5e352dd834afde547ba1ee7b71141319e3547/diff:/var/lib/docker/overlay2/2b1f9abca5d32e21fe1da66b2604d858599b74fc9359bd55e050cebccaba5c7d/diff:/var/lib/docker/overlay2/a5f956d0de2a0313dfbaefb921518d8a75267b71a9e7c68207a81682db5394b5/diff:/var/lib/docker/overlay2/e0050af32b9eb0f12404cf384139cd48050d4a969d090faaa07b9f42fe954627/diff:/var/lib/docker/overlay2/f18c15fd90b361f7a13265b5426d985a47e261abde790665028916551b5218f3/diff:/var/lib/docker/overlay2/0f266ad6b65c857206fd10e121b74564370ca213f5706493619b6a590c496660/diff:/var/lib/docker/overlay2/fc044060d3681022984120753b0c02afc05afbb256dbdfc9f7f5e966e1d98820/diff:/var/lib/docker/overlay2/91df5011d1388013be2af7bb3097195366fd38d1f46d472e630aab583779f7c0/diff:/var/lib/docker/overlay2/f810a7fbc880b9ff7c367b14e34088e851fa045d860ce4bf4c49999fcf814a6e/diff:/var/lib/docker/overlay2/318584cae4acc059b81627e00ae703167673c73d234d6e64e894fc3500750f90/diff:/var/lib/docker/overlay2/a2e1d86ffb5aec517fe891619294d506621a002f4c53e8d3103d5d4ce777ebaf/diff:/var/lib/docker/overlay2/12fd1d215a6881aa03a06f2b8a5415b483530db121b120b66940e1e5cd2e1b96/diff:/var/lib/docker/overlay2/28bbbfc0404aecb7d7d79b4c2bfec07cd44260c922a982af523bda70bbd7be20/diff:/var/lib/docker/overlay2/4dc0077174d58a8904abddfc67a48e6dd082a1eebc72518af19da37b4eff7b2c/diff:/var/lib/docker/overlay2/4d39db844b44258dbb67b16662175b453df7bfd43274abbf1968486539955750/diff:/var/lib/docker/overlay2/ca34d73c6c31358a3eb714a014a5961863e05dee505a1cfca2c8829380ce362b/diff:/var/lib/docker/overlay2/0c0595112799a0b3604c58158946fb3d0657c4198a6a72e12fbe29a74174d3ea/diff:/var/lib/docker/overlay2/5fc43276da56e90293816918613014e7cec7bedc292a062d39d034c95d56351d/diff:/var/lib/docker/overlay2/71a282cb60752128ee370ced1695c67c421341d364956818e5852fd6714a0e64/diff:/var/lib/docker/overlay2/07723c7054e35caae4987fa66d3d1fd44de0d2875612274dde2bf04e8349b0a0/diff:/var/lib/docker/overlay2/0433db88749fb49b0f02cc65b7113c97134270991a8a82bbe7ff4432aae7e502/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7b6d40e371d77fb8dadecd60798ef83c7557fe3841ddf04cd4f88525b795a293/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7b6d40e371d77fb8dadecd60798ef83c7557fe3841ddf04cd4f88525b795a293/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7b6d40e371d77fb8dadecd60798ef83c7557fe3841ddf04cd4f88525b795a293/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-20210915185528-22848",
	                "Source": "/var/lib/docker/volumes/functional-20210915185528-22848/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-20210915185528-22848",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-20210915185528-22848",
	                "name.minikube.sigs.k8s.io": "functional-20210915185528-22848",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6f05e6e475cd82f2595580c74ef64163f4cb22eab4619e370e687b6e1ec37349",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55730"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55731"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55732"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55733"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55734"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/6f05e6e475cd",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-20210915185528-22848": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "911613e33eed",
	                        "functional-20210915185528-22848"
	                    ],
	                    "NetworkID": "ca8cf024d05080a281a892a5461499862486e1e4ca113d318b5bc1d53d1bfb31",
	                    "EndpointID": "3676fc7dcc4d6c0a54b926bb37cbd4b7dec1a2d4c32b3f63c6918c551c05e699",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20210915185528-22848 -n functional-20210915185528-22848

=== CONT  TestFunctional/parallel/StatusCmd
helpers_test.go:240: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20210915185528-22848 -n functional-20210915185528-22848: (6.3207354s)
helpers_test.go:245: <<< TestFunctional/parallel/StatusCmd FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestFunctional/parallel/StatusCmd]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915185528-22848 logs -n 25

=== CONT  TestFunctional/parallel/StatusCmd
helpers_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915185528-22848 logs -n 25: (18.6346385s)
helpers_test.go:253: TestFunctional/parallel/StatusCmd logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------|---------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	| Command |                                         Args                                          |             Profile             |          User           | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------------------------------------------|---------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	| -p      | functional-20210915185528-22848                                                       | functional-20210915185528-22848 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 18:59:48 GMT | Wed, 15 Sep 2021 18:59:52 GMT |
	|         | ssh sudo docker rmi                                                                   |                                 |                         |         |                               |                               |
	|         | k8s.gcr.io/pause:latest                                                               |                                 |                         |         |                               |                               |
	| -p      | functional-20210915185528-22848                                                       | functional-20210915185528-22848 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 18:59:57 GMT | Wed, 15 Sep 2021 19:00:02 GMT |
	|         | cache reload                                                                          |                                 |                         |         |                               |                               |
	| -p      | functional-20210915185528-22848                                                       | functional-20210915185528-22848 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 19:00:03 GMT | Wed, 15 Sep 2021 19:00:07 GMT |
	|         | ssh sudo crictl inspecti                                                              |                                 |                         |         |                               |                               |
	|         | k8s.gcr.io/pause:latest                                                               |                                 |                         |         |                               |                               |
	| cache   | delete k8s.gcr.io/pause:3.1                                                           | minikube                        | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 19:00:07 GMT | Wed, 15 Sep 2021 19:00:07 GMT |
	| cache   | delete k8s.gcr.io/pause:latest                                                        | minikube                        | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 19:00:08 GMT | Wed, 15 Sep 2021 19:00:08 GMT |
	| -p      | functional-20210915185528-22848                                                       | functional-20210915185528-22848 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 19:00:08 GMT | Wed, 15 Sep 2021 19:00:10 GMT |
	|         | kubectl -- --context                                                                  |                                 |                         |         |                               |                               |
	|         | functional-20210915185528-22848                                                       |                                 |                         |         |                               |                               |
	|         | get pods                                                                              |                                 |                         |         |                               |                               |
	| kubectl | --profile=functional-20210915185528-22848                                             | functional-20210915185528-22848 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 19:00:11 GMT | Wed, 15 Sep 2021 19:00:13 GMT |
	|         | -- --context                                                                          |                                 |                         |         |                               |                               |
	|         | functional-20210915185528-22848 get pods                                              |                                 |                         |         |                               |                               |
	| start   | -p functional-20210915185528-22848                                                    | functional-20210915185528-22848 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 19:00:13 GMT | Wed, 15 Sep 2021 19:02:13 GMT |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision              |                                 |                         |         |                               |                               |
	|         | --wait=all                                                                            |                                 |                         |         |                               |                               |
	| -p      | functional-20210915185528-22848                                                       | functional-20210915185528-22848 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 19:02:15 GMT | Wed, 15 Sep 2021 19:02:23 GMT |
	|         | logs                                                                                  |                                 |                         |         |                               |                               |
	| -p      | functional-20210915185528-22848 logs --file                                           | functional-20210915185528-22848 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 19:02:23 GMT | Wed, 15 Sep 2021 19:02:31 GMT |
	|         | C:\Users\jenkins\AppData\Local\Temp\functional-20210915185528-22848182613846\logs.txt |                                 |                         |         |                               |                               |
	| -p      | functional-20210915185528-22848                                                       | functional-20210915185528-22848 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 19:02:32 GMT | Wed, 15 Sep 2021 19:02:32 GMT |
	|         | config unset cpus                                                                     |                                 |                         |         |                               |                               |
	| -p      | functional-20210915185528-22848                                                       | functional-20210915185528-22848 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 19:02:32 GMT | Wed, 15 Sep 2021 19:02:32 GMT |
	|         | config set cpus 2                                                                     |                                 |                         |         |                               |                               |
	| -p      | functional-20210915185528-22848                                                       | functional-20210915185528-22848 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 19:02:33 GMT | Wed, 15 Sep 2021 19:02:33 GMT |
	|         | config get cpus                                                                       |                                 |                         |         |                               |                               |
	| -p      | functional-20210915185528-22848                                                       | functional-20210915185528-22848 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 19:02:34 GMT | Wed, 15 Sep 2021 19:02:34 GMT |
	|         | config unset cpus                                                                     |                                 |                         |         |                               |                               |
	| -p      | functional-20210915185528-22848                                                       | functional-20210915185528-22848 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 19:02:32 GMT | Wed, 15 Sep 2021 19:02:34 GMT |
	|         | addons list                                                                           |                                 |                         |         |                               |                               |
	| -p      | functional-20210915185528-22848                                                       | functional-20210915185528-22848 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 19:02:34 GMT | Wed, 15 Sep 2021 19:02:34 GMT |
	|         | addons list -o json                                                                   |                                 |                         |         |                               |                               |
	| -p      | functional-20210915185528-22848                                                       | functional-20210915185528-22848 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 19:02:32 GMT | Wed, 15 Sep 2021 19:02:37 GMT |
	|         | ssh sudo cat                                                                          |                                 |                         |         |                               |                               |
	|         | /etc/ssl/certs/22848.pem                                                              |                                 |                         |         |                               |                               |
	| -p      | functional-20210915185528-22848                                                       | functional-20210915185528-22848 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 19:02:37 GMT | Wed, 15 Sep 2021 19:02:43 GMT |
	|         | ssh sudo cat                                                                          |                                 |                         |         |                               |                               |
	|         | /usr/share/ca-certificates/22848.pem                                                  |                                 |                         |         |                               |                               |
	| profile | list --output json                                                                    | minikube                        | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 19:02:38 GMT | Wed, 15 Sep 2021 19:02:44 GMT |
	| -p      | functional-20210915185528-22848                                                       | functional-20210915185528-22848 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 19:02:43 GMT | Wed, 15 Sep 2021 19:02:49 GMT |
	|         | ssh sudo cat                                                                          |                                 |                         |         |                               |                               |
	|         | /etc/ssl/certs/51391683.0                                                             |                                 |                         |         |                               |                               |
	| profile | list                                                                                  | minikube                        | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 19:02:45 GMT | Wed, 15 Sep 2021 19:02:51 GMT |
	| profile | list -l                                                                               | minikube                        | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 19:02:51 GMT | Wed, 15 Sep 2021 19:02:51 GMT |
	| -p      | functional-20210915185528-22848                                                       | functional-20210915185528-22848 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 19:02:50 GMT | Wed, 15 Sep 2021 19:02:55 GMT |
	|         | ssh sudo cat                                                                          |                                 |                         |         |                               |                               |
	|         | /etc/ssl/certs/228482.pem                                                             |                                 |                         |         |                               |                               |
	| profile | list -o json                                                                          | minikube                        | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 19:02:52 GMT | Wed, 15 Sep 2021 19:02:57 GMT |
	| profile | list -o json --light                                                                  | minikube                        | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 19:02:58 GMT | Wed, 15 Sep 2021 19:02:58 GMT |
	|---------|---------------------------------------------------------------------------------------|---------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/09/15 19:00:13
	Running on machine: windows-server-1
	Binary: Built with gc go1.17 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0915 19:00:13.536463    7552 out.go:298] Setting OutFile to fd 1676 ...
	I0915 19:00:13.537449    7552 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0915 19:00:13.537449    7552 out.go:311] Setting ErrFile to fd 1660...
	I0915 19:00:13.537449    7552 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0915 19:00:13.553464    7552 out.go:305] Setting JSON to false
	I0915 19:00:13.557437    7552 start.go:111] hostinfo: {"hostname":"windows-server-1","uptime":9151887,"bootTime":1622580526,"procs":154,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"9879231f-6171-435d-bab4-5b366cc6391b"}
	W0915 19:00:13.558542    7552 start.go:119] gopshost.Virtualization returned error: not implemented yet
	I0915 19:00:13.563105    7552 out.go:177] * [functional-20210915185528-22848] minikube v1.23.0 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	I0915 19:00:13.563495    7552 notify.go:169] Checking for updates...
	I0915 19:00:13.565280    7552 out.go:177]   - KUBECONFIG=C:\Users\jenkins\minikube-integration\kubeconfig
	I0915 19:00:13.567646    7552 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins\minikube-integration\.minikube
	I0915 19:00:13.570489    7552 out.go:177]   - MINIKUBE_LOCATION=12425
	I0915 19:00:13.572274    7552 config.go:177] Loaded profile config "functional-20210915185528-22848": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.1
	I0915 19:00:13.572667    7552 driver.go:343] Setting default libvirt URI to qemu:///system
	I0915 19:00:15.458820    7552 docker.go:132] docker version: linux-20.10.5
	I0915 19:00:15.472365    7552 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0915 19:00:16.441409    7552 info.go:263] docker info: {ID:AZM6:4F7P:D7J3:PGKE:EIYN:3OQU:SEA3:BB2T:P6VC:GKKH:UKSA:R2VX Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:54 SystemTime:2021-09-15 19:00:16.0009166 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://inde
x.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[]
ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0915 19:00:16.447981    7552 out.go:177] * Using the docker driver based on existing profile
	I0915 19:00:16.448196    7552 start.go:278] selected driver: docker
	I0915 19:00:16.448196    7552 start.go:751] validating driver "docker" against &{Name:functional-20210915185528-22848 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.1 ClusterName:functional-20210915185528-22848 Namespace:default APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.22.1 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAdd
onImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0915 19:00:16.448196    7552 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0915 19:00:16.474651    7552 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0915 19:00:17.475896    7552 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.0002566s)
	I0915 19:00:17.476321    7552 info.go:263] docker info: {ID:AZM6:4F7P:D7J3:PGKE:EIYN:3OQU:SEA3:BB2T:P6VC:GKKH:UKSA:R2VX Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:54 SystemTime:2021-09-15 19:00:17.0539245 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://inde
x.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[]
ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0915 19:00:17.555122    7552 start_flags.go:737] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 19:00:17.555296    7552 cni.go:93] Creating CNI manager for ""
	I0915 19:00:17.555296    7552 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0915 19:00:17.555296    7552 start_flags.go:278] config:
	{Name:functional-20210915185528-22848 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.1 ClusterName:functional-20210915185528-22848 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Conta
inerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.22.1 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddo
nImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0915 19:00:17.559265    7552 out.go:177] * Starting control plane node functional-20210915185528-22848 in cluster functional-20210915185528-22848
	I0915 19:00:17.559458    7552 cache.go:118] Beginning downloading kic base image for docker with docker
	I0915 19:00:17.563990    7552 out.go:177] * Pulling base image ...
	I0915 19:00:17.564904    7552 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 in local docker daemon
	I0915 19:00:17.564904    7552 preload.go:131] Checking if preload exists for k8s version v1.22.1 and runtime docker
	I0915 19:00:17.565861    7552 preload.go:147] Found local preload: C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v12-v1.22.1-docker-overlay2-amd64.tar.lz4
	I0915 19:00:17.565861    7552 cache.go:57] Caching tarball of preloaded images
	I0915 19:00:17.566462    7552 preload.go:173] Found C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v12-v1.22.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0915 19:00:17.567659    7552 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.1 on docker
	I0915 19:00:17.567943    7552 profile.go:148] Saving config to C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915185528-22848\config.json ...
	I0915 19:00:18.244496    7552 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 in local docker daemon, skipping pull
	I0915 19:00:18.245061    7552 cache.go:140] gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 exists in daemon, skipping load
	I0915 19:00:18.245061    7552 cache.go:206] Successfully downloaded all kic artifacts
	I0915 19:00:18.245273    7552 start.go:313] acquiring machines lock for functional-20210915185528-22848: {Name:mkfee538efeff8d31b07f831bd3064dcc53fbc7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 19:00:18.245898    7552 start.go:317] acquired machines lock for "functional-20210915185528-22848" in 382.6µs
	I0915 19:00:18.246105    7552 start.go:93] Skipping create...Using existing machine configuration
	I0915 19:00:18.246105    7552 fix.go:55] fixHost starting: 
	I0915 19:00:18.275222    7552 cli_runner.go:115] Run: docker container inspect functional-20210915185528-22848 --format={{.State.Status}}
	I0915 19:00:18.874750    7552 fix.go:108] recreateIfNeeded on functional-20210915185528-22848: state=Running err=<nil>
	W0915 19:00:18.874750    7552 fix.go:134] unexpected machine state, will restart: <nil>
	I0915 19:00:18.878702    7552 out.go:177] * Updating the running docker "functional-20210915185528-22848" container ...
	I0915 19:00:18.879416    7552 machine.go:88] provisioning docker machine ...
	I0915 19:00:18.879416    7552 ubuntu.go:169] provisioning hostname "functional-20210915185528-22848"
	I0915 19:00:18.893166    7552 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210915185528-22848
	I0915 19:00:19.555220    7552 main.go:130] libmachine: Using SSH client type: native
	I0915 19:00:19.555921    7552 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x11facc0] 0x11fdb80 <nil>  [] 0s} 127.0.0.1 55730 <nil> <nil>}
	I0915 19:00:19.555921    7552 main.go:130] libmachine: About to run SSH command:
	sudo hostname functional-20210915185528-22848 && echo "functional-20210915185528-22848" | sudo tee /etc/hostname
	I0915 19:00:19.936181    7552 main.go:130] libmachine: SSH cmd err, output: <nil>: functional-20210915185528-22848
	
	I0915 19:00:19.949821    7552 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210915185528-22848
	I0915 19:00:20.612389    7552 main.go:130] libmachine: Using SSH client type: native
	I0915 19:00:20.612820    7552 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x11facc0] 0x11fdb80 <nil>  [] 0s} 127.0.0.1 55730 <nil> <nil>}
	I0915 19:00:20.612962    7552 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-20210915185528-22848' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-20210915185528-22848/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-20210915185528-22848' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0915 19:00:20.970994    7552 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0915 19:00:20.970994    7552 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins\minikube-integration\.minikube CaCertPath:C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins\minikube-integration\.minikube}
	I0915 19:00:20.970994    7552 ubuntu.go:177] setting up certificates
	I0915 19:00:20.971291    7552 provision.go:83] configureAuth start
	I0915 19:00:21.000300    7552 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-20210915185528-22848
	I0915 19:00:21.622477    7552 provision.go:138] copyHostCerts
	I0915 19:00:21.623254    7552 exec_runner.go:145] found C:\Users\jenkins\minikube-integration\.minikube/ca.pem, removing ...
	I0915 19:00:21.623472    7552 exec_runner.go:208] rm: C:\Users\jenkins\minikube-integration\.minikube\ca.pem
	I0915 19:00:21.623757    7552 exec_runner.go:152] cp: C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0915 19:00:21.625951    7552 exec_runner.go:145] found C:\Users\jenkins\minikube-integration\.minikube/cert.pem, removing ...
	I0915 19:00:21.625951    7552 exec_runner.go:208] rm: C:\Users\jenkins\minikube-integration\.minikube\cert.pem
	I0915 19:00:21.626248    7552 exec_runner.go:152] cp: C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0915 19:00:21.627830    7552 exec_runner.go:145] found C:\Users\jenkins\minikube-integration\.minikube/key.pem, removing ...
	I0915 19:00:21.627830    7552 exec_runner.go:208] rm: C:\Users\jenkins\minikube-integration\.minikube\key.pem
	I0915 19:00:21.628322    7552 exec_runner.go:152] cp: C:\Users\jenkins\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins\minikube-integration\.minikube/key.pem (1675 bytes)
	I0915 19:00:21.630091    7552 provision.go:112] generating server cert: C:\Users\jenkins\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-20210915185528-22848 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube functional-20210915185528-22848]
	I0915 19:00:21.847786    7552 provision.go:172] copyRemoteCerts
	I0915 19:00:21.859789    7552 ssh_runner.go:152] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0915 19:00:21.870571    7552 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210915185528-22848
	I0915 19:00:22.517377    7552 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55730 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\functional-20210915185528-22848\id_rsa Username:docker}
	I0915 19:00:22.722975    7552 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0915 19:00:22.816822    7552 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1265 bytes)
	I0915 19:00:22.902055    7552 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0915 19:00:22.982537    7552 provision.go:86] duration metric: configureAuth took 2.0112586s
	I0915 19:00:22.982537    7552 ubuntu.go:193] setting minikube options for container-runtime
	I0915 19:00:22.983056    7552 config.go:177] Loaded profile config "functional-20210915185528-22848": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.1
	I0915 19:00:23.002587    7552 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210915185528-22848
	I0915 19:00:23.632065    7552 main.go:130] libmachine: Using SSH client type: native
	I0915 19:00:23.632486    7552 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x11facc0] 0x11fdb80 <nil>  [] 0s} 127.0.0.1 55730 <nil> <nil>}
	I0915 19:00:23.632656    7552 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0915 19:00:23.993806    7552 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0915 19:00:23.993806    7552 ubuntu.go:71] root file system type: overlay
	I0915 19:00:23.994788    7552 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0915 19:00:24.008709    7552 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210915185528-22848
	I0915 19:00:24.634751    7552 main.go:130] libmachine: Using SSH client type: native
	I0915 19:00:24.635019    7552 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x11facc0] 0x11fdb80 <nil>  [] 0s} 127.0.0.1 55730 <nil> <nil>}
	I0915 19:00:24.635372    7552 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0915 19:00:25.003838    7552 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0915 19:00:25.023561    7552 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210915185528-22848
	I0915 19:00:25.626350    7552 main.go:130] libmachine: Using SSH client type: native
	I0915 19:00:25.626350    7552 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x11facc0] 0x11fdb80 <nil>  [] 0s} 127.0.0.1 55730 <nil> <nil>}
	I0915 19:00:25.626350    7552 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0915 19:00:25.989874    7552 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0915 19:00:25.989874    7552 machine.go:91] provisioned docker machine in 7.1105034s
	I0915 19:00:25.989874    7552 start.go:267] post-start starting for "functional-20210915185528-22848" (driver="docker")
	I0915 19:00:25.989874    7552 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0915 19:00:26.010030    7552 ssh_runner.go:152] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0915 19:00:26.028601    7552 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210915185528-22848
	I0915 19:00:26.644968    7552 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55730 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\functional-20210915185528-22848\id_rsa Username:docker}
	I0915 19:00:26.895208    7552 ssh_runner.go:152] Run: cat /etc/os-release
	I0915 19:00:26.915232    7552 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0915 19:00:26.915232    7552 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0915 19:00:26.915232    7552 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0915 19:00:26.915232    7552 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0915 19:00:26.915232    7552 filesync.go:126] Scanning C:\Users\jenkins\minikube-integration\.minikube\addons for local assets ...
	I0915 19:00:26.916206    7552 filesync.go:126] Scanning C:\Users\jenkins\minikube-integration\.minikube\files for local assets ...
	I0915 19:00:26.916206    7552 filesync.go:149] local asset: C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\228482.pem -> 228482.pem in /etc/ssl/certs
	I0915 19:00:26.917205    7552 filesync.go:149] local asset: C:\Users\jenkins\minikube-integration\.minikube\files\etc\test\nested\copy\22848\hosts -> hosts in /etc/test/nested/copy/22848
	I0915 19:00:26.929206    7552 ssh_runner.go:152] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/22848
	I0915 19:00:26.965891    7552 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\228482.pem --> /etc/ssl/certs/228482.pem (1708 bytes)
	I0915 19:00:27.040972    7552 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\files\etc\test\nested\copy\22848\hosts --> /etc/test/nested/copy/22848/hosts (40 bytes)
	I0915 19:00:27.112983    7552 start.go:270] post-start completed in 1.1231159s
	I0915 19:00:27.129755    7552 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 19:00:27.139480    7552 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210915185528-22848
	I0915 19:00:27.781594    7552 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55730 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\functional-20210915185528-22848\id_rsa Username:docker}
	I0915 19:00:28.010129    7552 fix.go:57] fixHost completed within 9.7640872s
	I0915 19:00:28.010129    7552 start.go:80] releasing machines lock for "functional-20210915185528-22848", held for 9.764294s
	I0915 19:00:28.032189    7552 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-20210915185528-22848
	I0915 19:00:28.651617    7552 ssh_runner.go:152] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0915 19:00:28.665728    7552 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210915185528-22848
	I0915 19:00:28.666585    7552 ssh_runner.go:152] Run: systemctl --version
	I0915 19:00:28.676495    7552 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210915185528-22848
	I0915 19:00:29.345614    7552 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55730 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\functional-20210915185528-22848\id_rsa Username:docker}
	I0915 19:00:29.422816    7552 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55730 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\functional-20210915185528-22848\id_rsa Username:docker}
	I0915 19:00:29.645778    7552 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service containerd
	I0915 19:00:29.838434    7552 ssh_runner.go:192] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.1865719s)
	I0915 19:00:29.847432    7552 ssh_runner.go:152] Run: sudo systemctl cat docker.service
	I0915 19:00:29.908656    7552 cruntime.go:255] skipping containerd shutdown because we are bound to it
	I0915 19:00:29.929638    7552 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service crio
	I0915 19:00:29.991467    7552 ssh_runner.go:152] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0915 19:00:30.074961    7552 ssh_runner.go:152] Run: sudo systemctl unmask docker.service
	I0915 19:00:30.440840    7552 ssh_runner.go:152] Run: sudo systemctl enable docker.socket
	I0915 19:00:30.766843    7552 ssh_runner.go:152] Run: sudo systemctl cat docker.service
	I0915 19:00:30.823358    7552 ssh_runner.go:152] Run: sudo systemctl daemon-reload
	I0915 19:00:31.140281    7552 ssh_runner.go:152] Run: sudo systemctl start docker
	I0915 19:00:31.203105    7552 ssh_runner.go:152] Run: docker version --format {{.Server.Version}}
	I0915 19:00:31.399767    7552 ssh_runner.go:152] Run: docker version --format {{.Server.Version}}
	I0915 19:00:31.574649    7552 out.go:204] * Preparing Kubernetes v1.22.1 on Docker 20.10.8 ...
	I0915 19:00:31.589438    7552 cli_runner.go:115] Run: docker exec -t functional-20210915185528-22848 dig +short host.docker.internal
	I0915 19:00:32.620973    7552 cli_runner.go:168] Completed: docker exec -t functional-20210915185528-22848 dig +short host.docker.internal: (1.0315411s)
	I0915 19:00:32.620973    7552 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0915 19:00:32.645718    7552 ssh_runner.go:152] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0915 19:00:32.727408    7552 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-20210915185528-22848
	I0915 19:00:33.408286    7552 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0915 19:00:33.408540    7552 preload.go:131] Checking if preload exists for k8s version v1.22.1 and runtime docker
	I0915 19:00:33.425022    7552 ssh_runner.go:152] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0915 19:00:33.600991    7552 docker.go:558] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-20210915185528-22848
	k8s.gcr.io/kube-apiserver:v1.22.1
	k8s.gcr.io/kube-scheduler:v1.22.1
	k8s.gcr.io/kube-proxy:v1.22.1
	k8s.gcr.io/kube-controller-manager:v1.22.1
	k8s.gcr.io/etcd:3.5.0-0
	k8s.gcr.io/coredns/coredns:v1.8.4
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.5
	kubernetesui/dashboard:v2.1.0
	k8s.gcr.io/pause:3.3
	kubernetesui/metrics-scraper:v1.0.4
	k8s.gcr.io/pause:3.1
	k8s.gcr.io/pause:latest
	
	-- /stdout --
	I0915 19:00:33.600991    7552 docker.go:489] Images already preloaded, skipping extraction
	I0915 19:00:33.610210    7552 ssh_runner.go:152] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0915 19:00:33.886915    7552 docker.go:558] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-20210915185528-22848
	k8s.gcr.io/kube-apiserver:v1.22.1
	k8s.gcr.io/kube-controller-manager:v1.22.1
	k8s.gcr.io/kube-proxy:v1.22.1
	k8s.gcr.io/kube-scheduler:v1.22.1
	k8s.gcr.io/etcd:3.5.0-0
	k8s.gcr.io/coredns/coredns:v1.8.4
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.5
	kubernetesui/dashboard:v2.1.0
	k8s.gcr.io/pause:3.3
	kubernetesui/metrics-scraper:v1.0.4
	k8s.gcr.io/pause:3.1
	k8s.gcr.io/pause:latest
	
	-- /stdout --
	I0915 19:00:33.886915    7552 cache_images.go:78] Images are preloaded, skipping loading
	I0915 19:00:33.901609    7552 ssh_runner.go:152] Run: docker info --format {{.CgroupDriver}}
	I0915 19:00:34.304232    7552 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0915 19:00:34.304353    7552 cni.go:93] Creating CNI manager for ""
	I0915 19:00:34.304353    7552 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0915 19:00:34.304353    7552 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0915 19:00:34.304550    7552 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.22.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-20210915185528-22848 NodeName:functional-20210915185528-22848 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0915 19:00:34.305202    7552 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "functional-20210915185528-22848"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.22.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0915 19:00:34.305820    7552 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.22.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=functional-20210915185528-22848 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.22.1 ClusterName:functional-20210915185528-22848 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:}
	I0915 19:00:34.332407    7552 ssh_runner.go:152] Run: sudo ls /var/lib/minikube/binaries/v1.22.1
	I0915 19:00:34.398521    7552 binaries.go:44] Found k8s binaries, skipping transfer
	I0915 19:00:34.416440    7552 ssh_runner.go:152] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0915 19:00:34.453371    7552 ssh_runner.go:319] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (357 bytes)
	I0915 19:00:34.521837    7552 ssh_runner.go:319] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0915 19:00:34.587876    7552 ssh_runner.go:319] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1924 bytes)
	I0915 19:00:34.671732    7552 ssh_runner.go:152] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0915 19:00:34.697110    7552 certs.go:52] Setting up C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915185528-22848 for IP: 192.168.49.2
	I0915 19:00:34.697407    7552 certs.go:179] skipping minikubeCA CA generation: C:\Users\jenkins\minikube-integration\.minikube\ca.key
	I0915 19:00:34.697773    7552 certs.go:179] skipping proxyClientCA CA generation: C:\Users\jenkins\minikube-integration\.minikube\proxy-client-ca.key
	I0915 19:00:34.698398    7552 certs.go:293] skipping minikube-user signed cert generation: C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915185528-22848\client.key
	I0915 19:00:34.698671    7552 certs.go:293] skipping minikube signed cert generation: C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915185528-22848\apiserver.key.dd3b5fb2
	I0915 19:00:34.699219    7552 certs.go:293] skipping aggregator signed cert generation: C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915185528-22848\proxy-client.key
	I0915 19:00:34.700978    7552 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\22848.pem (1338 bytes)
	W0915 19:00:34.701333    7552 certs.go:372] ignoring C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\22848_empty.pem, impossibly tiny 0 bytes
	I0915 19:00:34.701481    7552 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0915 19:00:34.701816    7552 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0915 19:00:34.702043    7552 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0915 19:00:34.702303    7552 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0915 19:00:34.702977    7552 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\228482.pem (1708 bytes)
	I0915 19:00:34.712218    7552 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915185528-22848\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0915 19:00:34.811175    7552 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915185528-22848\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0915 19:00:34.928177    7552 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915185528-22848\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0915 19:00:35.032775    7552 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915185528-22848\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0915 19:00:35.113323    7552 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0915 19:00:35.183330    7552 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0915 19:00:35.270029    7552 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0915 19:00:35.352451    7552 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0915 19:00:35.437270    7552 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\certs\22848.pem --> /usr/share/ca-certificates/22848.pem (1338 bytes)
	I0915 19:00:35.527466    7552 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\228482.pem --> /usr/share/ca-certificates/228482.pem (1708 bytes)
	I0915 19:00:35.605186    7552 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0915 19:00:35.693505    7552 ssh_runner.go:319] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0915 19:00:35.793720    7552 ssh_runner.go:152] Run: openssl version
	I0915 19:00:35.847506    7552 ssh_runner.go:152] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22848.pem && ln -fs /usr/share/ca-certificates/22848.pem /etc/ssl/certs/22848.pem"
	I0915 19:00:35.908543    7552 ssh_runner.go:152] Run: ls -la /usr/share/ca-certificates/22848.pem
	I0915 19:00:35.936176    7552 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Sep 15 18:55 /usr/share/ca-certificates/22848.pem
	I0915 19:00:35.961580    7552 ssh_runner.go:152] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22848.pem
	I0915 19:00:36.025066    7552 ssh_runner.go:152] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22848.pem /etc/ssl/certs/51391683.0"
	I0915 19:00:36.083877    7552 ssh_runner.go:152] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/228482.pem && ln -fs /usr/share/ca-certificates/228482.pem /etc/ssl/certs/228482.pem"
	I0915 19:00:36.139683    7552 ssh_runner.go:152] Run: ls -la /usr/share/ca-certificates/228482.pem
	I0915 19:00:36.164246    7552 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Sep 15 18:55 /usr/share/ca-certificates/228482.pem
	I0915 19:00:36.178553    7552 ssh_runner.go:152] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/228482.pem
	I0915 19:00:36.233689    7552 ssh_runner.go:152] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/228482.pem /etc/ssl/certs/3ec20f2e.0"
	I0915 19:00:36.290690    7552 ssh_runner.go:152] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0915 19:00:36.354396    7552 ssh_runner.go:152] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0915 19:00:36.373414    7552 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Sep 15 18:34 /usr/share/ca-certificates/minikubeCA.pem
	I0915 19:00:36.390300    7552 ssh_runner.go:152] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0915 19:00:36.433757    7552 ssh_runner.go:152] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
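The sequence above mirrors OpenSSL's `c_rehash` convention: each CA certificate is copied into `/usr/share/ca-certificates/`, its subject hash is computed with `openssl x509 -hash -noout`, and a `<hash>.0` symlink is created under `/etc/ssl/certs/` so OpenSSL-based clients can find it. A minimal, self-contained sketch of that pattern (the function name and the local certs directory are illustrative; the real run targets `/etc/ssl/certs` and needs `sudo`):

```shell
# install_ca: link a cert into a certs dir under its OpenSSL subject hash,
# mirroring the "openssl x509 -hash" + "ln -fs ... <hash>.0" steps in the log.
install_ca() {
  cert=$1
  certs_dir=$2
  # Subject hash, e.g. 51391683, as used for the .0 symlink name
  hash=$(openssl x509 -hash -noout -in "$cert") || return 1
  ln -fs "$cert" "$certs_dir/$hash.0"
}
```

The `.0` suffix is the collision index; OpenSSL increments it when two certificates share a subject hash.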
	I0915 19:00:36.485753    7552 kubeadm.go:390] StartCluster: {Name:functional-20210915185528-22848 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.1 ClusterName:functional-20210915185528-22848 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.22.1 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0915 19:00:36.511758    7552 ssh_runner.go:152] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0915 19:00:36.643633    7552 ssh_runner.go:152] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0915 19:00:36.682590    7552 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0915 19:00:36.682590    7552 kubeadm.go:600] restartCluster start
	I0915 19:00:36.697377    7552 ssh_runner.go:152] Run: sudo test -d /data/minikube
	I0915 19:00:36.745944    7552 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0915 19:00:36.771293    7552 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-20210915185528-22848
	I0915 19:00:37.428475    7552 kubeconfig.go:93] found "functional-20210915185528-22848" server: "https://127.0.0.1:55734"
	I0915 19:00:37.463562    7552 ssh_runner.go:152] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0915 19:00:37.511123    7552 kubeadm.go:568] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2021-09-15 18:57:49.316221000 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2021-09-15 19:00:34.635703000 +0000
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
	I0915 19:00:37.511123    7552 kubeadm.go:1032] stopping kube-system containers ...
	I0915 19:00:37.525521    7552 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0915 19:00:37.665632    7552 docker.go:390] Stopping containers: [a3ab1112f8c0 be181f506c70 6c33ffd51974 0aa327ea6855 5834041e16ab a49d3bd8af1b 1efadde39b35 08710a1f70a2 6cb84dfa0246 2d654b63f5b3 38322ac06c0e 59fe5e6148f1 d8a56d9a35cc 4c237c5570c4]
	I0915 19:00:37.679550    7552 ssh_runner.go:152] Run: docker stop a3ab1112f8c0 be181f506c70 6c33ffd51974 0aa327ea6855 5834041e16ab a49d3bd8af1b 1efadde39b35 08710a1f70a2 6cb84dfa0246 2d654b63f5b3 38322ac06c0e 59fe5e6148f1 d8a56d9a35cc 4c237c5570c4
	I0915 19:00:45.058982    7552 ssh_runner.go:192] Completed: docker stop a3ab1112f8c0 be181f506c70 6c33ffd51974 0aa327ea6855 5834041e16ab a49d3bd8af1b 1efadde39b35 08710a1f70a2 6cb84dfa0246 2d654b63f5b3 38322ac06c0e 59fe5e6148f1 d8a56d9a35cc 4c237c5570c4: (7.3789782s)
	I0915 19:00:45.080360    7552 ssh_runner.go:152] Run: sudo systemctl stop kubelet
	I0915 19:00:45.891251    7552 ssh_runner.go:152] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0915 19:00:46.056331    7552 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5639 Sep 15 18:57 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Sep 15 18:57 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 Sep 15 18:58 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Sep 15 18:57 /etc/kubernetes/scheduler.conf
	
	I0915 19:00:46.070693    7552 ssh_runner.go:152] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0915 19:00:46.187567    7552 ssh_runner.go:152] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0915 19:00:46.382892    7552 ssh_runner.go:152] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0915 19:00:46.503248    7552 kubeadm.go:165] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0915 19:00:46.518243    7552 ssh_runner.go:152] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0915 19:00:46.603710    7552 ssh_runner.go:152] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0915 19:00:46.669984    7552 kubeadm.go:165] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0915 19:00:46.680853    7552 ssh_runner.go:152] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0915 19:00:46.737395    7552 ssh_runner.go:152] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0915 19:00:46.788027    7552 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0915 19:00:46.788027    7552 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.1:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0915 19:00:47.212049    7552 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.1:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0915 19:00:49.915677    7552 ssh_runner.go:192] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.1:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.7036449s)
	I0915 19:00:49.915677    7552 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.1:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0915 19:00:50.479922    7552 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.1:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0915 19:00:50.759031    7552 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.1:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
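The five `kubeadm init phase` invocations above are the fixed order minikube's restart path uses to reconfigure an existing cluster: certs, kubeconfigs, kubelet start, control-plane static pods, then etcd. A dry-run sketch that just prints that sequence (the binary path and config path are copied from the log, not re-derived):

```shell
# kubeadm_restart_phases: print the reconfigure phase sequence seen in the
# log, in order. Dry-run only; the real run executes each line via sudo.
kubeadm_restart_phases() {
  bindir=$1   # e.g. /var/lib/minikube/binaries/v1.22.1
  cfg=$2      # e.g. /var/tmp/minikube/kubeadm.yaml
  for phase in "certs all" "kubeconfig all" "kubelet-start" \
               "control-plane all" "etcd local"; do
    echo "sudo env PATH=$bindir:\$PATH kubeadm init phase $phase --config $cfg"
  done
}
```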
	I0915 19:00:50.991798    7552 api_server.go:50] waiting for apiserver process to appear ...
	I0915 19:00:51.022551    7552 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 19:00:51.092269    7552 api_server.go:70] duration metric: took 100.4716ms to wait for apiserver process to appear ...
	I0915 19:00:51.092269    7552 api_server.go:86] waiting for apiserver healthz status ...
	I0915 19:00:51.092740    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:00:56.096018    7552 api_server.go:255] stopped: https://127.0.0.1:55734/healthz: Get "https://127.0.0.1:55734/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 19:00:56.597717    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:01.600665    7552 api_server.go:255] stopped: https://127.0.0.1:55734/healthz: Get "https://127.0.0.1:55734/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 19:01:02.101086    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:07.102908    7552 api_server.go:255] stopped: https://127.0.0.1:55734/healthz: Get "https://127.0.0.1:55734/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 19:01:07.597196    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:08.758430    7552 api_server.go:265] https://127.0.0.1:55734/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0915 19:01:08.758430    7552 api_server.go:101] status: https://127.0.0.1:55734/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
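The polling pattern here, and through the rest of the log, is a ~500ms retry loop against `/healthz` with a per-request client timeout of 5s: `context deadline exceeded` means no response arrived in time, `EOF` means the connection was cut (typically while the apiserver restarts), and 403/500 are real responses from a server that is up but not yet healthy. A minimal sketch of such a loop, assuming `curl` is available (the URL and attempt count are placeholders, not minikube's real values):

```shell
# wait_healthz: poll a healthz-style URL roughly every 500ms until it
# answers with a 2xx, or give up after a fixed number of attempts.
wait_healthz() {
  url=$1
  attempts=${2:-20}
  i=0
  while [ "$i" -lt "$attempts" ]; do
    # --max-time 5 mirrors the per-request 5s client timeout in the log
    if curl -fsS --max-time 5 "$url" >/dev/null 2>&1; then
      return 0    # healthy
    fi
    i=$((i+1))
    sleep 0.5
  done
  return 1        # timed out waiting for a healthy response
}
```

Treating 403 and 500 as "keep waiting" rather than fatal is the key design choice: during a restart the apiserver comes up before RBAC bootstrap roles and admission post-start hooks finish, so early responses are expected to fail.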
	I0915 19:01:09.096713    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:09.210461    7552 api_server.go:265] https://127.0.0.1:55734/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0915 19:01:09.210461    7552 api_server.go:101] status: https://127.0.0.1:55734/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0915 19:01:09.598437    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:09.675399    7552 api_server.go:265] https://127.0.0.1:55734/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0915 19:01:09.675399    7552 api_server.go:101] status: https://127.0.0.1:55734/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0915 19:01:10.097529    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:10.269864    7552 api_server.go:265] https://127.0.0.1:55734/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0915 19:01:10.269864    7552 api_server.go:101] status: https://127.0.0.1:55734/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0915 19:01:10.596847    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:10.697126    7552 api_server.go:265] https://127.0.0.1:55734/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0915 19:01:10.697126    7552 api_server.go:101] status: https://127.0.0.1:55734/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0915 19:01:11.097913    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:11.488027    7552 api_server.go:255] stopped: https://127.0.0.1:55734/healthz: Get "https://127.0.0.1:55734/healthz": EOF
	I0915 19:01:11.597688    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:11.624403    7552 api_server.go:255] stopped: https://127.0.0.1:55734/healthz: Get "https://127.0.0.1:55734/healthz": EOF
	I0915 19:01:12.097690    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:12.114143    7552 api_server.go:255] stopped: https://127.0.0.1:55734/healthz: Get "https://127.0.0.1:55734/healthz": EOF
	I0915 19:01:12.597572    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:12.616329    7552 api_server.go:255] stopped: https://127.0.0.1:55734/healthz: Get "https://127.0.0.1:55734/healthz": EOF
	I0915 19:01:13.098124    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:13.107265    7552 api_server.go:255] stopped: https://127.0.0.1:55734/healthz: Get "https://127.0.0.1:55734/healthz": EOF
	I0915 19:01:13.596637    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:13.617341    7552 api_server.go:255] stopped: https://127.0.0.1:55734/healthz: Get "https://127.0.0.1:55734/healthz": EOF
	I0915 19:01:14.096776    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:14.114375    7552 api_server.go:255] stopped: https://127.0.0.1:55734/healthz: Get "https://127.0.0.1:55734/healthz": EOF
	I0915 19:01:14.597055    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:14.609505    7552 api_server.go:255] stopped: https://127.0.0.1:55734/healthz: Get "https://127.0.0.1:55734/healthz": EOF
	I0915 19:01:15.096971    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:15.104308    7552 api_server.go:255] stopped: https://127.0.0.1:55734/healthz: Get "https://127.0.0.1:55734/healthz": EOF
	I0915 19:01:15.596952    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:15.604690    7552 api_server.go:255] stopped: https://127.0.0.1:55734/healthz: Get "https://127.0.0.1:55734/healthz": EOF
	I0915 19:01:16.096534    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:16.102563    7552 api_server.go:255] stopped: https://127.0.0.1:55734/healthz: Get "https://127.0.0.1:55734/healthz": EOF
	I0915 19:01:16.596762    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:16.604667    7552 api_server.go:255] stopped: https://127.0.0.1:55734/healthz: Get "https://127.0.0.1:55734/healthz": EOF
	I0915 19:01:17.097264    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:17.108386    7552 api_server.go:255] stopped: https://127.0.0.1:55734/healthz: Get "https://127.0.0.1:55734/healthz": EOF
	I0915 19:01:17.596332    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:17.604644    7552 api_server.go:255] stopped: https://127.0.0.1:55734/healthz: Get "https://127.0.0.1:55734/healthz": EOF
	I0915 19:01:18.096524    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:18.109790    7552 api_server.go:255] stopped: https://127.0.0.1:55734/healthz: Get "https://127.0.0.1:55734/healthz": EOF
	I0915 19:01:18.596288    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:18.605108    7552 api_server.go:255] stopped: https://127.0.0.1:55734/healthz: Get "https://127.0.0.1:55734/healthz": EOF
	I0915 19:01:19.096925    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:19.104242    7552 api_server.go:255] stopped: https://127.0.0.1:55734/healthz: Get "https://127.0.0.1:55734/healthz": EOF
	I0915 19:01:19.598206    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:19.607880    7552 api_server.go:255] stopped: https://127.0.0.1:55734/healthz: Get "https://127.0.0.1:55734/healthz": EOF
	I0915 19:01:20.097120    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:20.107005    7552 api_server.go:255] stopped: https://127.0.0.1:55734/healthz: Get "https://127.0.0.1:55734/healthz": EOF
	I0915 19:01:20.598310    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:20.605057    7552 api_server.go:255] stopped: https://127.0.0.1:55734/healthz: Get "https://127.0.0.1:55734/healthz": EOF
	I0915 19:01:21.096803    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:21.104367    7552 api_server.go:255] stopped: https://127.0.0.1:55734/healthz: Get "https://127.0.0.1:55734/healthz": EOF
	I0915 19:01:21.596748    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:21.603998    7552 api_server.go:255] stopped: https://127.0.0.1:55734/healthz: Get "https://127.0.0.1:55734/healthz": EOF
	I0915 19:01:22.097809    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:22.104561    7552 api_server.go:255] stopped: https://127.0.0.1:55734/healthz: Get "https://127.0.0.1:55734/healthz": EOF
	I0915 19:01:22.596737    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:22.604429    7552 api_server.go:255] stopped: https://127.0.0.1:55734/healthz: Get "https://127.0.0.1:55734/healthz": EOF
	I0915 19:01:23.097396    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:23.106109    7552 api_server.go:255] stopped: https://127.0.0.1:55734/healthz: Get "https://127.0.0.1:55734/healthz": EOF
	I0915 19:01:23.596481    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:23.607400    7552 api_server.go:255] stopped: https://127.0.0.1:55734/healthz: Get "https://127.0.0.1:55734/healthz": EOF
	I0915 19:01:24.098847    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:24.115445    7552 api_server.go:255] stopped: https://127.0.0.1:55734/healthz: Get "https://127.0.0.1:55734/healthz": EOF
	I0915 19:01:24.596523    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:29.597692    7552 api_server.go:255] stopped: https://127.0.0.1:55734/healthz: Get "https://127.0.0.1:55734/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 19:01:29.598060    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:32.874687    7552 api_server.go:265] https://127.0.0.1:55734/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0915 19:01:32.875153    7552 api_server.go:101] status: https://127.0.0.1:55734/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0915 19:01:33.096218    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:33.179763    7552 api_server.go:265] https://127.0.0.1:55734/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0915 19:01:33.179763    7552 api_server.go:101] status: https://127.0.0.1:55734/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0915 19:01:33.596289    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:33.764419    7552 api_server.go:265] https://127.0.0.1:55734/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0915 19:01:33.764419    7552 api_server.go:101] status: https://127.0.0.1:55734/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0915 19:01:34.095934    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:34.178744    7552 api_server.go:265] https://127.0.0.1:55734/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0915 19:01:34.178744    7552 api_server.go:101] status: https://127.0.0.1:55734/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0915 19:01:34.596581    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:34.675732    7552 api_server.go:265] https://127.0.0.1:55734/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0915 19:01:34.676074    7552 api_server.go:101] status: https://127.0.0.1:55734/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0915 19:01:35.096004    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:35.186695    7552 api_server.go:265] https://127.0.0.1:55734/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0915 19:01:35.186695    7552 api_server.go:101] status: https://127.0.0.1:55734/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0915 19:01:35.596578    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:35.628009    7552 api_server.go:265] https://127.0.0.1:55734/healthz returned 200:
	ok
	I0915 19:01:35.671034    7552 api_server.go:139] control plane version: v1.22.1
	I0915 19:01:35.671034    7552 api_server.go:129] duration metric: took 44.5790507s to wait for apiserver health ...
	I0915 19:01:35.671034    7552 cni.go:93] Creating CNI manager for ""
	I0915 19:01:35.671034    7552 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0915 19:01:35.671449    7552 system_pods.go:43] waiting for kube-system pods to appear ...
	I0915 19:01:35.716456    7552 system_pods.go:59] 7 kube-system pods found
	I0915 19:01:35.716456    7552 system_pods.go:61] "coredns-78fcd69978-kz448" [02c2eb54-ff80-44e8-8801-802e1dc5625f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0915 19:01:35.716456    7552 system_pods.go:61] "etcd-functional-20210915185528-22848" [a5e1a1e8-31bc-44a8-8f69-78a87dd8bfeb] Running
	I0915 19:01:35.716456    7552 system_pods.go:61] "kube-apiserver-functional-20210915185528-22848" [b6c28e24-2401-4fe3-ab11-91ea44668cd1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0915 19:01:35.716456    7552 system_pods.go:61] "kube-controller-manager-functional-20210915185528-22848" [344a80f2-afc1-414e-8446-e8eeff71ec5c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0915 19:01:35.716456    7552 system_pods.go:61] "kube-proxy-75lgx" [f51bd18e-0ce7-4813-ba9e-d2eedb280750] Running
	I0915 19:01:35.716456    7552 system_pods.go:61] "kube-scheduler-functional-20210915185528-22848" [d7d0d620-55ad-4a82-994c-0f03657affe9] Running
	I0915 19:01:35.716456    7552 system_pods.go:61] "storage-provisioner" [0079c81b-04ae-439f-be17-1a1ba8697238] Running
	I0915 19:01:35.716456    7552 system_pods.go:74] duration metric: took 45.008ms to wait for pod list to return data ...
	I0915 19:01:35.716456    7552 node_conditions.go:102] verifying NodePressure condition ...
	I0915 19:01:35.731769    7552 node_conditions.go:122] node storage ephemeral capacity is 65792556Ki
	I0915 19:01:35.731890    7552 node_conditions.go:123] node cpu capacity is 4
	I0915 19:01:35.731890    7552 node_conditions.go:105] duration metric: took 15.434ms to run NodePressure ...
	I0915 19:01:35.732126    7552 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.1:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0915 19:01:36.439081    7552 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0915 19:01:36.476002    7552 kubeadm.go:746] kubelet initialised
	I0915 19:01:36.476002    7552 kubeadm.go:747] duration metric: took 36.9216ms waiting for restarted kubelet to initialise ...
	I0915 19:01:36.476002    7552 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 19:01:36.519211    7552 pod_ready.go:78] waiting up to 4m0s for pod "coredns-78fcd69978-kz448" in "kube-system" namespace to be "Ready" ...
	I0915 19:01:38.599784    7552 pod_ready.go:102] pod "coredns-78fcd69978-kz448" in "kube-system" namespace has status "Ready":"False"
	I0915 19:01:40.097774    7552 pod_ready.go:92] pod "coredns-78fcd69978-kz448" in "kube-system" namespace has status "Ready":"True"
	I0915 19:01:40.097949    7552 pod_ready.go:81] duration metric: took 3.5787608s waiting for pod "coredns-78fcd69978-kz448" in "kube-system" namespace to be "Ready" ...
	I0915 19:01:40.097949    7552 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-20210915185528-22848" in "kube-system" namespace to be "Ready" ...
	I0915 19:01:40.128033    7552 pod_ready.go:92] pod "etcd-functional-20210915185528-22848" in "kube-system" namespace has status "Ready":"True"
	I0915 19:01:40.128033    7552 pod_ready.go:81] duration metric: took 30.0837ms waiting for pod "etcd-functional-20210915185528-22848" in "kube-system" namespace to be "Ready" ...
	I0915 19:01:40.128033    7552 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-20210915185528-22848" in "kube-system" namespace to be "Ready" ...
	I0915 19:01:40.166047    7552 pod_ready.go:92] pod "kube-apiserver-functional-20210915185528-22848" in "kube-system" namespace has status "Ready":"True"
	I0915 19:01:40.166047    7552 pod_ready.go:81] duration metric: took 38.0148ms waiting for pod "kube-apiserver-functional-20210915185528-22848" in "kube-system" namespace to be "Ready" ...
	I0915 19:01:40.166047    7552 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-20210915185528-22848" in "kube-system" namespace to be "Ready" ...
	I0915 19:01:42.324900    7552 pod_ready.go:102] pod "kube-controller-manager-functional-20210915185528-22848" in "kube-system" namespace has status "Ready":"False"
	I0915 19:01:44.759614    7552 pod_ready.go:102] pod "kube-controller-manager-functional-20210915185528-22848" in "kube-system" namespace has status "Ready":"False"
	I0915 19:01:47.250491    7552 pod_ready.go:102] pod "kube-controller-manager-functional-20210915185528-22848" in "kube-system" namespace has status "Ready":"False"
	I0915 19:01:49.253792    7552 pod_ready.go:102] pod "kube-controller-manager-functional-20210915185528-22848" in "kube-system" namespace has status "Ready":"False"
	I0915 19:01:51.260700    7552 pod_ready.go:102] pod "kube-controller-manager-functional-20210915185528-22848" in "kube-system" namespace has status "Ready":"False"
	I0915 19:01:53.789932    7552 pod_ready.go:102] pod "kube-controller-manager-functional-20210915185528-22848" in "kube-system" namespace has status "Ready":"False"
	I0915 19:01:56.269062    7552 pod_ready.go:102] pod "kube-controller-manager-functional-20210915185528-22848" in "kube-system" namespace has status "Ready":"False"
	I0915 19:01:58.751404    7552 pod_ready.go:102] pod "kube-controller-manager-functional-20210915185528-22848" in "kube-system" namespace has status "Ready":"False"
	I0915 19:02:00.754716    7552 pod_ready.go:102] pod "kube-controller-manager-functional-20210915185528-22848" in "kube-system" namespace has status "Ready":"False"
	I0915 19:02:02.762317    7552 pod_ready.go:102] pod "kube-controller-manager-functional-20210915185528-22848" in "kube-system" namespace has status "Ready":"False"
	I0915 19:02:05.264452    7552 pod_ready.go:102] pod "kube-controller-manager-functional-20210915185528-22848" in "kube-system" namespace has status "Ready":"False"
	I0915 19:02:07.302728    7552 pod_ready.go:102] pod "kube-controller-manager-functional-20210915185528-22848" in "kube-system" namespace has status "Ready":"False"
	I0915 19:02:09.755338    7552 pod_ready.go:92] pod "kube-controller-manager-functional-20210915185528-22848" in "kube-system" namespace has status "Ready":"True"
	I0915 19:02:09.755338    7552 pod_ready.go:81] duration metric: took 29.5894804s waiting for pod "kube-controller-manager-functional-20210915185528-22848" in "kube-system" namespace to be "Ready" ...
	I0915 19:02:09.755338    7552 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-75lgx" in "kube-system" namespace to be "Ready" ...
	I0915 19:02:09.803007    7552 pod_ready.go:92] pod "kube-proxy-75lgx" in "kube-system" namespace has status "Ready":"True"
	I0915 19:02:09.803007    7552 pod_ready.go:81] duration metric: took 47.6688ms waiting for pod "kube-proxy-75lgx" in "kube-system" namespace to be "Ready" ...
	I0915 19:02:09.803007    7552 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-20210915185528-22848" in "kube-system" namespace to be "Ready" ...
	I0915 19:02:09.838680    7552 pod_ready.go:92] pod "kube-scheduler-functional-20210915185528-22848" in "kube-system" namespace has status "Ready":"True"
	I0915 19:02:09.838680    7552 pod_ready.go:81] duration metric: took 35.6733ms waiting for pod "kube-scheduler-functional-20210915185528-22848" in "kube-system" namespace to be "Ready" ...
	I0915 19:02:09.838680    7552 pod_ready.go:38] duration metric: took 33.3628915s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 19:02:09.838853    7552 ssh_runner.go:152] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0915 19:02:09.926642    7552 ops.go:34] apiserver oom_adj: -16
	I0915 19:02:09.926642    7552 kubeadm.go:604] restartCluster took 1m33.2446488s
	I0915 19:02:09.926642    7552 kubeadm.go:392] StartCluster complete in 1m33.4414873s
	I0915 19:02:09.927036    7552 settings.go:142] acquiring lock: {Name:mk81656fcf8bcddd49caaa1adb1c177165a02100 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 19:02:09.927652    7552 settings.go:150] Updating kubeconfig:  C:\Users\jenkins\minikube-integration\kubeconfig
	I0915 19:02:09.929848    7552 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\kubeconfig: {Name:mk312e0248780fd448f3a83862df8ee597f47373 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 19:02:10.017209    7552 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "functional-20210915185528-22848" rescaled to 1
	I0915 19:02:10.017209    7552 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.22.1 ControlPlane:true Worker:true}
	I0915 19:02:10.022625    7552 out.go:177] * Verifying Kubernetes components...
	I0915 19:02:10.017209    7552 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0915 19:02:10.018293    7552 config.go:177] Loaded profile config "functional-20210915185528-22848": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.1
	I0915 19:02:10.018293    7552 addons.go:404] enableAddons start: toEnable=map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false], additional=[]
	I0915 19:02:10.023208    7552 addons.go:65] Setting storage-provisioner=true in profile "functional-20210915185528-22848"
	I0915 19:02:10.023208    7552 addons.go:65] Setting default-storageclass=true in profile "functional-20210915185528-22848"
	I0915 19:02:10.023208    7552 addons.go:153] Setting addon storage-provisioner=true in "functional-20210915185528-22848"
	W0915 19:02:10.023208    7552 addons.go:165] addon storage-provisioner should already be in state true
	I0915 19:02:10.023208    7552 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-20210915185528-22848"
	I0915 19:02:10.023208    7552 host.go:66] Checking if "functional-20210915185528-22848" exists ...
	I0915 19:02:10.073047    7552 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
	I0915 19:02:10.076100    7552 cli_runner.go:115] Run: docker container inspect functional-20210915185528-22848 --format={{.State.Status}}
	I0915 19:02:10.076100    7552 cli_runner.go:115] Run: docker container inspect functional-20210915185528-22848 --format={{.State.Status}}
	I0915 19:02:10.349993    7552 start.go:709] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0915 19:02:10.374608    7552 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-20210915185528-22848
	I0915 19:02:10.972808    7552 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0915 19:02:10.973814    7552 addons.go:337] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 19:02:10.973814    7552 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0915 19:02:10.988802    7552 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210915185528-22848
	I0915 19:02:11.050791    7552 addons.go:153] Setting addon default-storageclass=true in "functional-20210915185528-22848"
	W0915 19:02:11.050791    7552 addons.go:165] addon default-storageclass should already be in state true
	I0915 19:02:11.050916    7552 host.go:66] Checking if "functional-20210915185528-22848" exists ...
	I0915 19:02:11.081681    7552 cli_runner.go:115] Run: docker container inspect functional-20210915185528-22848 --format={{.State.Status}}
	I0915 19:02:11.213121    7552 node_ready.go:35] waiting up to 6m0s for node "functional-20210915185528-22848" to be "Ready" ...
	I0915 19:02:11.237299    7552 node_ready.go:49] node "functional-20210915185528-22848" has status "Ready":"True"
	I0915 19:02:11.237436    7552 node_ready.go:38] duration metric: took 24.3158ms waiting for node "functional-20210915185528-22848" to be "Ready" ...
	I0915 19:02:11.237436    7552 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 19:02:11.275418    7552 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-kz448" in "kube-system" namespace to be "Ready" ...
	I0915 19:02:11.321585    7552 pod_ready.go:92] pod "coredns-78fcd69978-kz448" in "kube-system" namespace has status "Ready":"True"
	I0915 19:02:11.321585    7552 pod_ready.go:81] duration metric: took 46.1677ms waiting for pod "coredns-78fcd69978-kz448" in "kube-system" namespace to be "Ready" ...
	I0915 19:02:11.321585    7552 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-20210915185528-22848" in "kube-system" namespace to be "Ready" ...
	I0915 19:02:11.345327    7552 pod_ready.go:92] pod "etcd-functional-20210915185528-22848" in "kube-system" namespace has status "Ready":"True"
	I0915 19:02:11.345327    7552 pod_ready.go:81] duration metric: took 23.7413ms waiting for pod "etcd-functional-20210915185528-22848" in "kube-system" namespace to be "Ready" ...
	I0915 19:02:11.345327    7552 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-20210915185528-22848" in "kube-system" namespace to be "Ready" ...
	I0915 19:02:11.410641    7552 pod_ready.go:92] pod "kube-apiserver-functional-20210915185528-22848" in "kube-system" namespace has status "Ready":"True"
	I0915 19:02:11.410641    7552 pod_ready.go:81] duration metric: took 65.3152ms waiting for pod "kube-apiserver-functional-20210915185528-22848" in "kube-system" namespace to be "Ready" ...
	I0915 19:02:11.410852    7552 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-20210915185528-22848" in "kube-system" namespace to be "Ready" ...
	I0915 19:02:11.490043    7552 pod_ready.go:92] pod "kube-controller-manager-functional-20210915185528-22848" in "kube-system" namespace has status "Ready":"True"
	I0915 19:02:11.490043    7552 pod_ready.go:81] duration metric: took 79.1922ms waiting for pod "kube-controller-manager-functional-20210915185528-22848" in "kube-system" namespace to be "Ready" ...
	I0915 19:02:11.490043    7552 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-75lgx" in "kube-system" namespace to be "Ready" ...
	I0915 19:02:11.633371    7552 pod_ready.go:92] pod "kube-proxy-75lgx" in "kube-system" namespace has status "Ready":"True"
	I0915 19:02:11.633371    7552 pod_ready.go:81] duration metric: took 143.3286ms waiting for pod "kube-proxy-75lgx" in "kube-system" namespace to be "Ready" ...
	I0915 19:02:11.633504    7552 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-20210915185528-22848" in "kube-system" namespace to be "Ready" ...
	I0915 19:02:11.783774    7552 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55730 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\functional-20210915185528-22848\id_rsa Username:docker}
	I0915 19:02:11.875640    7552 addons.go:337] installing /etc/kubernetes/addons/storageclass.yaml
	I0915 19:02:11.875640    7552 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0915 19:02:11.885521    7552 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210915185528-22848
	I0915 19:02:12.029533    7552 pod_ready.go:92] pod "kube-scheduler-functional-20210915185528-22848" in "kube-system" namespace has status "Ready":"True"
	I0915 19:02:12.029533    7552 pod_ready.go:81] duration metric: took 396.031ms waiting for pod "kube-scheduler-functional-20210915185528-22848" in "kube-system" namespace to be "Ready" ...
	I0915 19:02:12.029533    7552 pod_ready.go:38] duration metric: took 792.1013ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 19:02:12.029533    7552 api_server.go:50] waiting for apiserver process to appear ...
	I0915 19:02:12.042590    7552 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 19:02:12.129789    7552 api_server.go:70] duration metric: took 2.1125938s to wait for apiserver process to appear ...
	I0915 19:02:12.129789    7552 api_server.go:86] waiting for apiserver healthz status ...
	I0915 19:02:12.129789    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:02:12.135093    7552 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 19:02:12.200167    7552 api_server.go:265] https://127.0.0.1:55734/healthz returned 200:
	ok
	I0915 19:02:12.210634    7552 api_server.go:139] control plane version: v1.22.1
	I0915 19:02:12.210634    7552 api_server.go:129] duration metric: took 80.8453ms to wait for apiserver health ...
	I0915 19:02:12.210634    7552 system_pods.go:43] waiting for kube-system pods to appear ...
	I0915 19:02:12.260972    7552 system_pods.go:59] 7 kube-system pods found
	I0915 19:02:12.261067    7552 system_pods.go:61] "coredns-78fcd69978-kz448" [02c2eb54-ff80-44e8-8801-802e1dc5625f] Running
	I0915 19:02:12.261067    7552 system_pods.go:61] "etcd-functional-20210915185528-22848" [a5e1a1e8-31bc-44a8-8f69-78a87dd8bfeb] Running
	I0915 19:02:12.261067    7552 system_pods.go:61] "kube-apiserver-functional-20210915185528-22848" [b6c28e24-2401-4fe3-ab11-91ea44668cd1] Running
	I0915 19:02:12.261067    7552 system_pods.go:61] "kube-controller-manager-functional-20210915185528-22848" [344a80f2-afc1-414e-8446-e8eeff71ec5c] Running
	I0915 19:02:12.261067    7552 system_pods.go:61] "kube-proxy-75lgx" [f51bd18e-0ce7-4813-ba9e-d2eedb280750] Running
	I0915 19:02:12.261067    7552 system_pods.go:61] "kube-scheduler-functional-20210915185528-22848" [d7d0d620-55ad-4a82-994c-0f03657affe9] Running
	I0915 19:02:12.261067    7552 system_pods.go:61] "storage-provisioner" [0079c81b-04ae-439f-be17-1a1ba8697238] Running
	I0915 19:02:12.261067    7552 system_pods.go:74] duration metric: took 50.4335ms to wait for pod list to return data ...
	I0915 19:02:12.261067    7552 default_sa.go:34] waiting for default service account to be created ...
	I0915 19:02:12.440471    7552 default_sa.go:45] found service account: "default"
	I0915 19:02:12.440471    7552 default_sa.go:55] duration metric: took 179.4055ms for default service account to be created ...
	I0915 19:02:12.440471    7552 system_pods.go:116] waiting for k8s-apps to be running ...
	I0915 19:02:12.640992    7552 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55730 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\functional-20210915185528-22848\id_rsa Username:docker}
	I0915 19:02:12.915710    7552 system_pods.go:86] 7 kube-system pods found
	I0915 19:02:12.915710    7552 system_pods.go:89] "coredns-78fcd69978-kz448" [02c2eb54-ff80-44e8-8801-802e1dc5625f] Running
	I0915 19:02:12.915710    7552 system_pods.go:89] "etcd-functional-20210915185528-22848" [a5e1a1e8-31bc-44a8-8f69-78a87dd8bfeb] Running
	I0915 19:02:12.915710    7552 system_pods.go:89] "kube-apiserver-functional-20210915185528-22848" [b6c28e24-2401-4fe3-ab11-91ea44668cd1] Running
	I0915 19:02:12.915710    7552 system_pods.go:89] "kube-controller-manager-functional-20210915185528-22848" [344a80f2-afc1-414e-8446-e8eeff71ec5c] Running
	I0915 19:02:12.915710    7552 system_pods.go:89] "kube-proxy-75lgx" [f51bd18e-0ce7-4813-ba9e-d2eedb280750] Running
	I0915 19:02:12.915710    7552 system_pods.go:89] "kube-scheduler-functional-20210915185528-22848" [d7d0d620-55ad-4a82-994c-0f03657affe9] Running
	I0915 19:02:12.915710    7552 system_pods.go:89] "storage-provisioner" [0079c81b-04ae-439f-be17-1a1ba8697238] Running
	I0915 19:02:12.915710    7552 system_pods.go:126] duration metric: took 475.2417ms to wait for k8s-apps to be running ...
	I0915 19:02:12.915710    7552 system_svc.go:44] waiting for kubelet service to be running ....
	I0915 19:02:12.927708    7552 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
	I0915 19:02:12.940850    7552 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0915 19:02:13.586743    7552 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.4516597s)
	I0915 19:02:13.587152    7552 system_svc.go:56] duration metric: took 671.4467ms WaitForService to wait for kubelet.
	I0915 19:02:13.587152    7552 kubeadm.go:547] duration metric: took 3.5699665s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0915 19:02:13.587152    7552 node_conditions.go:102] verifying NodePressure condition ...
	I0915 19:02:13.600877    7552 node_conditions.go:122] node storage ephemeral capacity is 65792556Ki
	I0915 19:02:13.600877    7552 node_conditions.go:123] node cpu capacity is 4
	I0915 19:02:13.600999    7552 node_conditions.go:105] duration metric: took 13.8464ms to run NodePressure ...
	I0915 19:02:13.600999    7552 start.go:231] waiting for startup goroutines ...
	I0915 19:02:13.764631    7552 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0915 19:02:13.765075    7552 addons.go:406] enableAddons completed in 3.7478904s
	I0915 19:02:13.968022    7552 start.go:462] kubectl: 1.20.0, cluster: 1.22.1 (minor skew: 2)
	I0915 19:02:13.970948    7552 out.go:177] 
	W0915 19:02:13.970948    7552 out.go:242] ! C:\Program Files\Docker\Docker\resources\bin\kubectl.exe is version 1.20.0, which may have incompatibilities with Kubernetes 1.22.1.
	I0915 19:02:13.974274    7552 out.go:177]   - Want kubectl v1.22.1? Try 'minikube kubectl -- get pods -A'
	I0915 19:02:13.976510    7552 out.go:177] * Done! kubectl is now configured to use "functional-20210915185528-22848" cluster and "default" namespace by default
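
	The warning just above reports a client/server minor-version skew (kubectl 1.20.0 against cluster 1.22.1, "minor skew: 2"); kubectl is only supported within one minor version of the API server. As a hedged illustration of that check (a sketch only — the helper name is made up and this is not minikube's actual code; the version strings come from the log):

	```python
	# Hypothetical helper showing the minor-skew computation behind the
	# "minor skew: 2" note above; not the real minikube implementation.
	def minor_skew(client_version: str, server_version: str) -> int:
	    """Absolute difference between minor components, e.g. 1.20.0 vs 1.22.1 -> 2."""
	    client_minor = int(client_version.split(".")[1])
	    server_minor = int(server_version.split(".")[1])
	    return abs(client_minor - server_minor)

	print(minor_skew("1.20.0", "1.22.1"))  # 2: outside kubectl's +/-1 skew policy
	```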
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2021-09-15 18:55:46 UTC, end at Wed 2021-09-15 19:03:08 UTC. --
	Sep 15 19:00:39 functional-20210915185528-22848 dockerd[784]: time="2021-09-15T19:00:39.673074600Z" level=info msg="ignoring event" container=59fe5e6148f128c8126d8fb52a05f55ae2b37e728cd26ecc0c4ff3e1292b24ce module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 19:00:39 functional-20210915185528-22848 dockerd[784]: time="2021-09-15T19:00:39.753020200Z" level=info msg="ignoring event" container=a3ab1112f8c0f323e489f958a7b0d1e702fbc8c174150a2d9cf406fc73e940a7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 19:00:40 functional-20210915185528-22848 dockerd[784]: time="2021-09-15T19:00:40.049864000Z" level=info msg="ignoring event" container=5834041e16abd18f408b1ab85db4e1c23e54fda10acbab2d307ac5f00f659f32 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 19:00:40 functional-20210915185528-22848 dockerd[784]: time="2021-09-15T19:00:40.062282800Z" level=info msg="ignoring event" container=d8a56d9a35cca3a0b472ffc534cac42f7c7df77382a63c9a179e91cb01594563 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 19:00:40 functional-20210915185528-22848 dockerd[784]: time="2021-09-15T19:00:40.075009400Z" level=info msg="ignoring event" container=0aa327ea6855b5b3f619f974024c63cf84a2a8ae192ac517c4ed31ec4527a7e5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 19:00:40 functional-20210915185528-22848 dockerd[784]: time="2021-09-15T19:00:40.162029100Z" level=info msg="ignoring event" container=4c237c5570c4f7191769a02bfb3ef30fd145a7fc3a7e8525125b337fbb8978e6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 19:00:40 functional-20210915185528-22848 dockerd[784]: time="2021-09-15T19:00:40.194706500Z" level=info msg="ignoring event" container=2d654b63f5b361593ef8fa5faebcfb2459725712489a515cc0eb1f4575744f7d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 19:00:40 functional-20210915185528-22848 dockerd[784]: time="2021-09-15T19:00:40.251922500Z" level=info msg="ignoring event" container=a49d3bd8af1ba1d317e3b4eb6709f2db4f6d688ca60b8fe7abc31d2a544ef508 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 19:00:40 functional-20210915185528-22848 dockerd[784]: time="2021-09-15T19:00:40.251989500Z" level=info msg="ignoring event" container=be181f506c70f38c1042fe26447d8e2a1beb6fd0283ff0e0af43a6f612b001d3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 19:00:40 functional-20210915185528-22848 dockerd[784]: time="2021-09-15T19:00:40.277685000Z" level=info msg="ignoring event" container=1efadde39b35091fd4fac9747511fc59cb2e648594d8c1220bffc056d799ffec module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 19:00:41 functional-20210915185528-22848 dockerd[784]: time="2021-09-15T19:00:41.295484600Z" level=info msg="ignoring event" container=08710a1f70a231dc86b7c004de084d7961598acc5dbcca4e2a7b54faefaabad7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 19:00:43 functional-20210915185528-22848 dockerd[784]: time="2021-09-15T19:00:43.273852500Z" level=info msg="ignoring event" container=6cb84dfa0246b3ae2d1d2faacc3171bfaf95046a870717df47201f7643e5d447 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 19:00:44 functional-20210915185528-22848 dockerd[784]: time="2021-09-15T19:00:44.484275400Z" level=info msg="ignoring event" container=6c33ffd51974d25a996b91ead83935203ab440228e760a3aa4c0aec26d6b2c2c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 19:00:46 functional-20210915185528-22848 dockerd[784]: time="2021-09-15T19:00:46.155724200Z" level=error msg="Handler for GET /v1.41/containers/5ccb2f65220cfc69b1d4149f4d9bdc333429077d5d31b3f94823b56050843cde/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
	Sep 15 19:00:46 functional-20210915185528-22848 dockerd[784]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11)
	Sep 15 19:00:46 functional-20210915185528-22848 dockerd[784]: time="2021-09-15T19:00:46.469484900Z" level=error msg="Handler for GET /v1.41/containers/017ee5ddc3ff85d70ea27ea66e42a2837eb5d0f9df15d654caf1becc14ceee98/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
	Sep 15 19:00:46 functional-20210915185528-22848 dockerd[784]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11)
	Sep 15 19:00:56 functional-20210915185528-22848 dockerd[784]: time="2021-09-15T19:00:56.672381300Z" level=info msg="ignoring event" container=5ccb2f65220cfc69b1d4149f4d9bdc333429077d5d31b3f94823b56050843cde module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 19:00:57 functional-20210915185528-22848 dockerd[784]: time="2021-09-15T19:00:57.898431700Z" level=info msg="ignoring event" container=3b151ad17508bd4b4d40972728559d1dc07630166e5bc81fd350ff346fdadf9f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 19:01:01 functional-20210915185528-22848 dockerd[784]: time="2021-09-15T19:01:01.470145500Z" level=info msg="ignoring event" container=e853dc1cb291f60a22b0fd2ca3cc37033199a48f71d9d8407ccb557b582e195d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 19:01:09 functional-20210915185528-22848 dockerd[784]: time="2021-09-15T19:01:09.864009400Z" level=info msg="ignoring event" container=bb0612c11b265cfaab181f5cb92ed5bc9b8f2ee51f237fc8f64c1b36f1bdab70 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 19:01:11 functional-20210915185528-22848 dockerd[784]: time="2021-09-15T19:01:11.584011600Z" level=info msg="ignoring event" container=e76ac171af93005fecd29e3f7e2302617292e1eba41e7a424bdcb4db5ef84c69 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 19:01:11 functional-20210915185528-22848 dockerd[784]: time="2021-09-15T19:01:11.891326400Z" level=info msg="ignoring event" container=2c36a50abcdd71f2a2295a921b432ea0f2748f475b950ceed6376569449fa6e6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 19:01:13 functional-20210915185528-22848 dockerd[784]: time="2021-09-15T19:01:13.577154400Z" level=info msg="ignoring event" container=e8b4bda89d6ccfe8f6f258c7933ae040134aec66ba18da19e85413b6df5bb305 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 19:01:32 functional-20210915185528-22848 dockerd[784]: time="2021-09-15T19:01:32.963534600Z" level=info msg="ignoring event" container=20d45c87c622b52056f8e55ae3e0ee1a54d32bdf3f2599543f29c344e910a66c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	d1d91286322e2       6e002eb89a881       About a minute ago   Running             kube-controller-manager   3                   3e6483cb8d4fb
	fadd5d0d97af5       6e38f40d628db       About a minute ago   Running             storage-provisioner       3                   7aa1be217e548
	805d322a477a0       f30469a2491a5       About a minute ago   Running             kube-apiserver            2                   a2a07058d2af8
	be25c29f97536       8d147537fb7d1       About a minute ago   Running             coredns                   1                   61be9a2057caf
	20d45c87c622b       6e002eb89a881       About a minute ago   Exited              kube-controller-manager   2                   3e6483cb8d4fb
	e8b4bda89d6cc       6e38f40d628db       About a minute ago   Exited              storage-provisioner       2                   7aa1be217e548
	8d529f0526c5e       36c4ebbc9d979       About a minute ago   Running             kube-proxy                1                   75c8ec4975ff6
	e853dc1cb291f       f30469a2491a5       2 minutes ago        Exited              kube-apiserver            1                   a2a07058d2af8
	2d74646aaaa24       aca5ededae9c8       2 minutes ago        Running             kube-scheduler            1                   017ee5ddc3ff8
	66f42af741b46       0048118155842       2 minutes ago        Running             etcd                      1                   35e564600f444
	6c33ffd51974d       8d147537fb7d1       4 minutes ago        Exited              coredns                   0                   0aa327ea6855b
	5834041e16abd       36c4ebbc9d979       4 minutes ago        Exited              kube-proxy                0                   a49d3bd8af1ba
	6cb84dfa0246b       aca5ededae9c8       5 minutes ago        Exited              kube-scheduler            0                   4c237c5570c4f
	2d654b63f5b36       0048118155842       5 minutes ago        Exited              etcd                      0                   d8a56d9a35cca
	
	* 
	* ==> coredns [6c33ffd51974] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.4
	linux/amd64, go1.16.4, 053c4d5
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	[INFO] Reloading complete
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [be25c29f9753] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	CoreDNS-1.8.4
	linux/amd64, go1.16.4, 053c4d5
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	* 
	* ==> describe nodes <==
	* Name:               functional-20210915185528-22848
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-20210915185528-22848
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0d321606059ead2904f4f5ddd59a9a7026c7ee04
	                    minikube.k8s.io/name=functional-20210915185528-22848
	                    minikube.k8s.io/updated_at=2021_09_15T18_58_23_0700
	                    minikube.k8s.io/version=v1.23.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 15 Sep 2021 18:58:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-20210915185528-22848
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 15 Sep 2021 19:03:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 15 Sep 2021 19:01:10 +0000   Wed, 15 Sep 2021 18:58:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 15 Sep 2021 19:01:10 +0000   Wed, 15 Sep 2021 18:58:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 15 Sep 2021 19:01:10 +0000   Wed, 15 Sep 2021 18:58:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 15 Sep 2021 19:01:10 +0000   Wed, 15 Sep 2021 18:58:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-20210915185528-22848
	Capacity:
	  cpu:                4
	  ephemeral-storage:  65792556Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             20481980Ki
	  pods:               110
	Allocatable:
	  cpu:                4
	  ephemeral-storage:  65792556Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             20481980Ki
	  pods:               110
	System Info:
	  Machine ID:                 4b5e5cdd53d44f5ab575bb522d42acca
	  System UUID:                bfeeb012-cea8-4229-b7d2-e375dd0bea17
	  Boot ID:                    7b7b18db-3e3e-49d3-a2cb-ac38329b7bd9
	  Kernel Version:             4.19.121-linuxkit
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.8
	  Kubelet Version:            v1.22.1
	  Kube-Proxy Version:         v1.22.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-6cbfcd7cbc-6zxnx                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 coredns-78fcd69978-kz448                                   100m (2%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m37s
	  kube-system                 etcd-functional-20210915185528-22848                       100m (2%)     0 (0%)      100Mi (0%)       0 (0%)         4m52s
	  kube-system                 kube-apiserver-functional-20210915185528-22848             250m (6%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-controller-manager-functional-20210915185528-22848    200m (5%)     0 (0%)      0 (0%)           0 (0%)         4m44s
	  kube-system                 kube-proxy-75lgx                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 kube-scheduler-functional-20210915185528-22848             100m (2%)     0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 storage-provisioner                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (18%)  0 (0%)
	  memory             170Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From     Message
	  ----    ------                   ----                   ----     -------
	  Normal  NodeHasSufficientMemory  5m10s (x8 over 5m12s)  kubelet  Node functional-20210915185528-22848 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m10s (x8 over 5m12s)  kubelet  Node functional-20210915185528-22848 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m10s (x7 over 5m12s)  kubelet  Node functional-20210915185528-22848 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m47s                  kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m46s                  kubelet  Node functional-20210915185528-22848 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m46s                  kubelet  Node functional-20210915185528-22848 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m46s                  kubelet  Node functional-20210915185528-22848 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             4m46s                  kubelet  Node functional-20210915185528-22848 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  4m45s                  kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m36s                  kubelet  Node functional-20210915185528-22848 status is now: NodeReady
	  Normal  Starting                 2m20s                  kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m18s                  kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m17s (x8 over 2m19s)  kubelet  Node functional-20210915185528-22848 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m17s (x8 over 2m19s)  kubelet  Node functional-20210915185528-22848 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m17s (x7 over 2m19s)  kubelet  Node functional-20210915185528-22848 status is now: NodeHasSufficientPID
	
	* 
	* ==> dmesg <==
	* [  +0.000000]  hrtimer_interrupt+0x92/0x165
	[  +0.000000]  hv_stimer0_isr+0x20/0x2d
	[  +0.000000]  hv_stimer0_vector_handler+0x3b/0x57
	[  +0.000000]  hv_stimer0_callback_vector+0xf/0x20
	[  +0.000000]  </IRQ>
	[  +0.000000] RIP: 0010:arch_local_irq_enable+0x7/0x8
	[  +0.000000] Code: ef ff ff 0f 20 d8 0f 1f 40 00 c3 48 89 f8 0f 1f 40 00 c3 48 89 f8 0f 1f 40 00 c3 48 89 f8 0f 1f 40 00 c3 fb 66 0f 1f 44 00 00 <c3> 0f 1f 44 00 00 40 f6 c7 02 74 12 48 b8 ff 0f 00 00 00 00 f0 ff
	[  +0.000000] RSP: 0000:ffffbcaf423f7ee0 EFLAGS: 00000206 ORIG_RAX: ffffffffffffff12
	[  +0.000000] RAX: 0000000080000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000000] RDX: 000055a9735499db RSI: 0000000000000004 RDI: ffffbcaf423f7f58
	[  +0.000000] RBP: ffffbcaf423f7f58 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000000] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000004
	[  +0.000000] R13: 000055a9735499db R14: ffff97d483b18dc0 R15: ffff97d4e4dc7400
	[  +0.000000]  __do_page_fault+0x17f/0x42d
	[  +0.000000]  ? page_fault+0x8/0x30
	[  +0.000000]  page_fault+0x1e/0x30
	[  +0.000000] RIP: 0033:0x55a9730c8f03
	[  +0.000000] Code: 0f 6f d9 66 0f ef 0d ec 85 97 00 66 0f ef 15 f4 85 97 00 66 0f ef 1d fc 85 97 00 66 0f 38 dc c9 66 0f 38 dc d2 66 0f 38 dc db <f3> 0f 6f 20 f3 0f 6f 68 10 f3 0f 6f 74 08 e0 f3 0f 6f 7c 08 f0 66
	[  +0.000000] RSP: 002b:000000c00004bdc8 EFLAGS: 00010287
	[  +0.000000] RAX: 000055a9735499db RBX: 000055a9730cb860 RCX: 0000000000000022
	[  +0.000000] RDX: 000000c00004bde0 RSI: 000000c00004be48 RDI: 000000c000080868
	[  +0.000000] RBP: 000000c00004be28 R08: 000055a97353d681 R09: 0000000000000000
	[  +0.000000] R10: 0000000000000004 R11: 000000c0000807d0 R12: 000000000000001a
	[  +0.000000] R13: 0000000000000006 R14: 0000000000000008 R15: 0000000000000017
	[  +0.000000] ---[ end trace cdbbbbc925f6eff0 ]---
	
	* 
	* ==> etcd [2d654b63f5b3] <==
	* {"level":"info","ts":"2021-09-15T18:58:05.580Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-20210915185528-22848 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2021-09-15T18:58:05.581Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-09-15T18:58:05.581Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-09-15T18:58:05.603Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2021-09-15T18:58:05.603Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2021-09-15T18:58:05.604Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2021-09-15T18:58:05.604Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2021-09-15T18:58:05.654Z","caller":"membership/cluster.go:531","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2021-09-15T18:58:05.654Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2021-09-15T18:58:05.654Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2021-09-15T18:58:05.780Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"warn","ts":"2021-09-15T18:58:34.368Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"113.4976ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2021-09-15T18:58:34.369Z","caller":"traceutil/trace.go:171","msg":"trace[1410007702] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:393; }","duration":"115.4215ms","start":"2021-09-15T18:58:34.253Z","end":"2021-09-15T18:58:34.369Z","steps":["trace[1410007702] 'agreement among raft nodes before linearized reading'  (duration: 42.2033ms)","trace[1410007702] 'range keys from in-memory index tree'  (duration: 71.2858ms)"],"step_count":2}
	{"level":"warn","ts":"2021-09-15T18:58:34.369Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"174.1406ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/endpointslice-controller\" ","response":"range_response_count:1 size:269"}
	{"level":"info","ts":"2021-09-15T18:58:34.369Z","caller":"traceutil/trace.go:171","msg":"trace[1889322249] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/endpointslice-controller; range_end:; response_count:1; response_revision:393; }","duration":"174.1931ms","start":"2021-09-15T18:58:34.195Z","end":"2021-09-15T18:58:34.369Z","steps":["trace[1889322249] 'agreement among raft nodes before linearized reading'  (duration: 100.869ms)","trace[1889322249] 'range keys from in-memory index tree'  (duration: 73.1343ms)"],"step_count":2}
	{"level":"warn","ts":"2021-09-15T18:58:34.370Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"174.6669ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/bootstrap-signer\" ","response":"range_response_count:1 size:245"}
	{"level":"info","ts":"2021-09-15T18:58:34.370Z","caller":"traceutil/trace.go:171","msg":"trace[1339931177] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/bootstrap-signer; range_end:; response_count:1; response_revision:393; }","duration":"174.7143ms","start":"2021-09-15T18:58:34.195Z","end":"2021-09-15T18:58:34.370Z","steps":["trace[1339931177] 'agreement among raft nodes before linearized reading'  (duration: 100.8296ms)","trace[1339931177] 'range keys from in-memory index tree'  (duration: 73.8143ms)"],"step_count":2}
	{"level":"info","ts":"2021-09-15T19:00:38.368Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2021-09-15T19:00:38.369Z","caller":"embed/etcd.go:367","msg":"closing etcd server","name":"functional-20210915185528-22848","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	WARNING: 2021/09/15 19:00:38 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2021/09/15 19:00:38 [core] grpc: addrConn.createTransport failed to connect to {192.168.49.2:2379 192.168.49.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.49.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2021-09-15T19:00:38.556Z","caller":"etcdserver/server.go:1438","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2021-09-15T19:00:38.565Z","caller":"embed/etcd.go:562","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2021-09-15T19:00:38.567Z","caller":"embed/etcd.go:567","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2021-09-15T19:00:38.567Z","caller":"embed/etcd.go:369","msg":"closed etcd server","name":"functional-20210915185528-22848","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	* 
	* ==> etcd [66f42af741b4] <==
	* {"level":"info","ts":"2021-09-15T19:00:55.769Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2021-09-15T19:00:55.781Z","caller":"etcdserver/server.go:834","msg":"starting etcd server","local-member-id":"aec36adc501070cc","local-server-version":"3.5.0","cluster-id":"fa54960ea34d58be","cluster-version":"3.5"}
	{"level":"info","ts":"2021-09-15T19:00:55.782Z","caller":"etcdserver/server.go:728","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"aec36adc501070cc","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2021-09-15T19:00:55.783Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2021-09-15T19:00:55.784Z","caller":"membership/cluster.go:393","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2021-09-15T19:00:55.788Z","caller":"membership/cluster.go:523","msg":"updated cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","from":"3.5","to":"3.5"}
	{"level":"info","ts":"2021-09-15T19:00:55.801Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2021-09-15T19:00:55.802Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2021-09-15T19:00:55.802Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2021-09-15T19:00:55.803Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2021-09-15T19:00:55.802Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2021-09-15T19:00:56.379Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2021-09-15T19:00:56.379Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2021-09-15T19:00:56.379Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2021-09-15T19:00:56.379Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2021-09-15T19:00:56.379Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2021-09-15T19:00:56.379Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2021-09-15T19:00:56.379Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2021-09-15T19:00:56.385Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-20210915185528-22848 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2021-09-15T19:00:56.385Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-09-15T19:00:56.387Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-09-15T19:00:56.391Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2021-09-15T19:00:56.391Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2021-09-15T19:00:56.413Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2021-09-15T19:00:56.455Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  19:03:13 up 38 min,  0 users,  load average: 1.73, 2.95, 4.65
	Linux functional-20210915185528-22848 4.19.121-linuxkit #1 SMP Thu Jan 21 15:36:34 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [805d322a477a] <==
	* I0915 19:01:32.127710       1 apf_controller.go:299] Starting API Priority and Fairness config controller
	E0915 19:01:32.161487       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I0915 19:01:32.661418       1 apf_controller.go:304] Running API Priority and Fairness config worker
	I0915 19:01:32.664156       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0915 19:01:32.664242       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0915 19:01:32.673081       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0915 19:01:32.780860       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0915 19:01:32.780882       1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
	I0915 19:01:32.780894       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0915 19:01:32.781088       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0915 19:01:32.781106       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0915 19:01:32.799557       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0915 19:01:32.801276       1 cache.go:39] Caches are synced for autoregister controller
	I0915 19:01:33.153300       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0915 19:01:33.153542       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0915 19:01:33.165458       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0915 19:01:36.104410       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0915 19:01:36.177303       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0915 19:01:36.312530       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0915 19:01:36.362339       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0915 19:01:36.381453       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0915 19:01:50.819894       1 controller.go:611] quota admission added evaluator for: endpoints
	I0915 19:02:07.465764       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0915 19:02:35.008724       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0915 19:02:35.206593       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	* 
	* ==> kube-apiserver [e853dc1cb291] <==
	* I0915 19:01:01.174718       1 server.go:553] external host was not specified, using 192.168.49.2
	I0915 19:01:01.177175       1 server.go:161] Version: v1.22.1
	Error: failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use
	
	* 
	* ==> kube-controller-manager [20d45c87c622] <==
	* 	/usr/local/go/src/bytes/buffer.go:204 +0xbe
	crypto/tls.(*Conn).readFromUntil(0xc000259500, 0x5176ac0, 0xc000186b80, 0x5, 0xc000186b80, 0x400)
		/usr/local/go/src/crypto/tls/conn.go:798 +0xf3
	crypto/tls.(*Conn).readRecordOrCCS(0xc000259500, 0x0, 0x0, 0x3)
		/usr/local/go/src/crypto/tls/conn.go:605 +0x115
	crypto/tls.(*Conn).readRecord(...)
		/usr/local/go/src/crypto/tls/conn.go:573
	crypto/tls.(*Conn).Read(0xc000259500, 0xc000036000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
		/usr/local/go/src/crypto/tls/conn.go:1276 +0x165
	bufio.(*Reader).Read(0xc0000d26c0, 0xc000fe43b8, 0x9, 0x9, 0x99f88b, 0xc0008b1c78, 0x4071a5)
		/usr/local/go/src/bufio/bufio.go:227 +0x222
	io.ReadAtLeast(0x516f400, 0xc0000d26c0, 0xc000fe43b8, 0x9, 0x9, 0x9, 0xc000da6010, 0x6591048f045b00, 0xc000da6010)
		/usr/local/go/src/io/io.go:328 +0x87
	io.ReadFull(...)
		/usr/local/go/src/io/io.go:347
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.readFrameHeader(0xc000fe43b8, 0x9, 0x9, 0x516f400, 0xc0000d26c0, 0x0, 0x0, 0x0, 0x0)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:237 +0x89
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc000fe4380, 0xc000d6c1b0, 0x0, 0x0, 0x0)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:492 +0xa5
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*clientConnReadLoop).run(0xc0008b1fa8, 0x0, 0x0)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:1821 +0xd8
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).readLoop(0xc00009fc80)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:1743 +0x6f
	created by k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).newClientConn
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:695 +0x6c5
	
	* 
	* ==> kube-controller-manager [d1d91286322e] <==
	* I0915 19:02:07.388766       1 shared_informer.go:247] Caches are synced for stateful set 
	I0915 19:02:07.392196       1 shared_informer.go:247] Caches are synced for PVC protection 
	I0915 19:02:07.396084       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client 
	I0915 19:02:07.397658       1 shared_informer.go:247] Caches are synced for attach detach 
	I0915 19:02:07.404260       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0915 19:02:07.405895       1 shared_informer.go:247] Caches are synced for certificate-csrapproving 
	I0915 19:02:07.458281       1 shared_informer.go:247] Caches are synced for crt configmap 
	I0915 19:02:07.460651       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0915 19:02:07.461985       1 shared_informer.go:247] Caches are synced for bootstrap_signer 
	I0915 19:02:07.464484       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	I0915 19:02:07.473637       1 shared_informer.go:247] Caches are synced for ephemeral 
	I0915 19:02:07.474003       1 shared_informer.go:247] Caches are synced for resource quota 
	I0915 19:02:07.474155       1 shared_informer.go:247] Caches are synced for expand 
	I0915 19:02:07.475822       1 shared_informer.go:247] Caches are synced for resource quota 
	I0915 19:02:07.477352       1 shared_informer.go:247] Caches are synced for disruption 
	I0915 19:02:07.479753       1 disruption.go:371] Sending events to api server.
	I0915 19:02:07.481935       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving 
	I0915 19:02:07.495929       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown 
	I0915 19:02:07.496277       1 shared_informer.go:247] Caches are synced for deployment 
	I0915 19:02:07.497268       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client 
	I0915 19:02:07.879303       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0915 19:02:07.954938       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0915 19:02:07.954971       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0915 19:02:35.037113       1 event.go:291] "Event occurred" object="default/hello-node" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-6cbfcd7cbc to 1"
	I0915 19:02:35.138748       1 event.go:291] "Event occurred" object="default/hello-node-6cbfcd7cbc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-6cbfcd7cbc-6zxnx"
	
	* 
	* ==> kube-proxy [5834041e16ab] <==
	* I0915 18:58:41.383253       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0915 18:58:41.383395       1 server_others.go:140] Detected node IP 192.168.49.2
	W0915 18:58:41.384238       1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
	I0915 18:58:41.878350       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0915 18:58:41.878431       1 server_others.go:212] Using iptables Proxier.
	I0915 18:58:41.878882       1 server_others.go:219] creating dualStackProxier for iptables.
	W0915 18:58:41.878943       1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0915 18:58:41.887684       1 server.go:649] Version: v1.22.1
	I0915 18:58:41.902017       1 config.go:315] Starting service config controller
	I0915 18:58:41.902056       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0915 18:58:41.902096       1 config.go:224] Starting endpoint slice config controller
	I0915 18:58:41.902108       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	E0915 18:58:41.997535       1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"functional-20210915185528-22848.16a513e6c7771690", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc048ae7875bc8204, ext:1624351101, loc:(*time.Location)(0x2d81340)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-functional-20210915185528-22848", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"functional-20210915185528-22848", UID:"functional-20210915185528-22848", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "functional-20210915185528-22848.16a513e6c7771690" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
	I0915 18:58:42.003014       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0915 18:58:42.003116       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-proxy [8d529f0526c5] <==
	* E0915 19:01:13.690897       1 node.go:161] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20210915185528-22848": dial tcp 192.168.49.2:8441: connect: connection refused
	E0915 19:01:14.863688       1 node.go:161] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20210915185528-22848": dial tcp 192.168.49.2:8441: connect: connection refused
	E0915 19:01:17.181305       1 node.go:161] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20210915185528-22848": dial tcp 192.168.49.2:8441: connect: connection refused
	E0915 19:01:21.531488       1 node.go:161] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20210915185528-22848": dial tcp 192.168.49.2:8441: connect: connection refused
	I0915 19:01:33.199178       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0915 19:01:33.199262       1 server_others.go:140] Detected node IP 192.168.49.2
	W0915 19:01:33.256597       1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
	I0915 19:01:33.995613       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0915 19:01:33.996242       1 server_others.go:212] Using iptables Proxier.
	I0915 19:01:33.996778       1 server_others.go:219] creating dualStackProxier for iptables.
	W0915 19:01:33.997251       1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0915 19:01:33.999227       1 server.go:649] Version: v1.22.1
	I0915 19:01:34.092829       1 config.go:315] Starting service config controller
	I0915 19:01:34.092862       1 shared_informer.go:240] Waiting for caches to sync for service config
	E0915 19:01:34.102089       1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"functional-20210915185528-22848.16a5140ede1ef91c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc048aea384c9b860, ext:21100838601, loc:(*time.Location)(0x2d81340)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-functional-20210915185528-22848", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"functional-20210915185528-22848", UID:"functional-20210915185528-22848", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "functional-20210915185528-22848.16a5140ede1ef91c" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
	I0915 19:01:34.103489       1 config.go:224] Starting endpoint slice config controller
	I0915 19:01:34.104349       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0915 19:01:34.104861       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0915 19:01:34.203877       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [2d74646aaaa2] <==
	* I0915 19:00:59.560769       1 serving.go:347] Generated self-signed cert in-memory
	W0915 19:01:08.968839       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0915 19:01:08.969062       1 authentication.go:345] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0915 19:01:08.969079       1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0915 19:01:08.969090       1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0915 19:01:09.200800       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0915 19:01:09.201036       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0915 19:01:09.201040       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0915 19:01:09.200210       1 secure_serving.go:200] Serving securely on 127.0.0.1:10259
	I0915 19:01:09.303722       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0915 19:01:32.489360       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)
	E0915 19:01:32.489479       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: unknown (get nodes)
	E0915 19:01:32.489551       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)
	E0915 19:01:32.489828       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: unknown (get namespaces)
	E0915 19:01:32.489871       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)
	E0915 19:01:32.489918       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)
	E0915 19:01:32.490169       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: unknown (get pods)
	E0915 19:01:32.493289       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)
	E0915 19:01:32.493360       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)
	E0915 19:01:32.493418       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)
	E0915 19:01:32.493509       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: unknown (get services)
	E0915 19:01:32.493569       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)
	E0915 19:01:32.494127       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)
	
	* 
	* ==> kube-scheduler [6cb84dfa0246] <==
	* E0915 18:58:16.702606       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0915 18:58:16.753258       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0915 18:58:16.753347       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0915 18:58:17.592458       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0915 18:58:17.653589       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0915 18:58:17.657854       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0915 18:58:17.700181       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0915 18:58:17.752938       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0915 18:58:17.757490       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0915 18:58:17.768191       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0915 18:58:17.807374       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0915 18:58:17.853899       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0915 18:58:17.855934       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0915 18:58:17.987959       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0915 18:58:18.003480       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0915 18:58:18.270353       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0915 18:58:18.320249       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0915 18:58:18.371434       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0915 18:58:20.085494       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0915 18:58:20.085598       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0915 18:58:20.237097       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	I0915 18:58:20.457296       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	I0915 19:00:39.054781       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0915 19:00:39.054953       1 secure_serving.go:311] Stopped listening on 127.0.0.1:10259
	I0915 19:00:39.055007       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2021-09-15 18:55:46 UTC, end at Wed 2021-09-15 19:03:16 UTC. --
	Sep 15 19:01:22 functional-20210915185528-22848 kubelet[5947]: I0915 19:01:22.401594    5947 status_manager.go:601] "Failed to get status for pod" podUID=38ad05134b9074b92e81105a83b60d33 pod="kube-system/kube-controller-manager-functional-20210915185528-22848" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-20210915185528-22848\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 15 19:01:23 functional-20210915185528-22848 kubelet[5947]: I0915 19:01:23.394915    5947 scope.go:110] "RemoveContainer" containerID="e853dc1cb291f60a22b0fd2ca3cc37033199a48f71d9d8407ccb557b582e195d"
	Sep 15 19:01:26 functional-20210915185528-22848 kubelet[5947]: I0915 19:01:26.398457    5947 scope.go:110] "RemoveContainer" containerID="e8b4bda89d6ccfe8f6f258c7933ae040134aec66ba18da19e85413b6df5bb305"
	Sep 15 19:01:32 functional-20210915185528-22848 kubelet[5947]: E0915 19:01:32.372839    5947 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Sep 15 19:01:32 functional-20210915185528-22848 kubelet[5947]: E0915 19:01:32.373804    5947 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Sep 15 19:01:32 functional-20210915185528-22848 kubelet[5947]: E0915 19:01:32.376535    5947 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Sep 15 19:01:34 functional-20210915185528-22848 kubelet[5947]: I0915 19:01:34.384531    5947 scope.go:110] "RemoveContainer" containerID="bb0612c11b265cfaab181f5cb92ed5bc9b8f2ee51f237fc8f64c1b36f1bdab70"
	Sep 15 19:01:34 functional-20210915185528-22848 kubelet[5947]: I0915 19:01:34.385242    5947 scope.go:110] "RemoveContainer" containerID="20d45c87c622b52056f8e55ae3e0ee1a54d32bdf3f2599543f29c344e910a66c"
	Sep 15 19:01:34 functional-20210915185528-22848 kubelet[5947]: E0915 19:01:34.386204    5947 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-20210915185528-22848_kube-system(38ad05134b9074b92e81105a83b60d33)\"" pod="kube-system/kube-controller-manager-functional-20210915185528-22848" podUID=38ad05134b9074b92e81105a83b60d33
	Sep 15 19:01:39 functional-20210915185528-22848 kubelet[5947]: I0915 19:01:39.260833    5947 scope.go:110] "RemoveContainer" containerID="20d45c87c622b52056f8e55ae3e0ee1a54d32bdf3f2599543f29c344e910a66c"
	Sep 15 19:01:39 functional-20210915185528-22848 kubelet[5947]: E0915 19:01:39.263564    5947 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-20210915185528-22848_kube-system(38ad05134b9074b92e81105a83b60d33)\"" pod="kube-system/kube-controller-manager-functional-20210915185528-22848" podUID=38ad05134b9074b92e81105a83b60d33
	Sep 15 19:01:40 functional-20210915185528-22848 kubelet[5947]: I0915 19:01:40.578660    5947 scope.go:110] "RemoveContainer" containerID="20d45c87c622b52056f8e55ae3e0ee1a54d32bdf3f2599543f29c344e910a66c"
	Sep 15 19:01:40 functional-20210915185528-22848 kubelet[5947]: E0915 19:01:40.580903    5947 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-20210915185528-22848_kube-system(38ad05134b9074b92e81105a83b60d33)\"" pod="kube-system/kube-controller-manager-functional-20210915185528-22848" podUID=38ad05134b9074b92e81105a83b60d33
	Sep 15 19:01:52 functional-20210915185528-22848 kubelet[5947]: E0915 19:01:52.458719    5947 fsHandler.go:114] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/docker/overlay2/e5714774789c86fd1bc99a5360bb140d97991147233748e0fe32067a9d75a2a4/diff" to get inode usage: stat /var/lib/docker/overlay2/e5714774789c86fd1bc99a5360bb140d97991147233748e0fe32067a9d75a2a4/diff: no such file or directory, extraDiskErr: could not stat "/var/lib/docker/containers/e76ac171af93005fecd29e3f7e2302617292e1eba41e7a424bdcb4db5ef84c69" to get inode usage: stat /var/lib/docker/containers/e76ac171af93005fecd29e3f7e2302617292e1eba41e7a424bdcb4db5ef84c69: no such file or directory
	Sep 15 19:01:52 functional-20210915185528-22848 kubelet[5947]: E0915 19:01:52.508695    5947 fsHandler.go:114] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/docker/overlay2/9494f7ee0cd76da476eb91a91e6c3afb07ef14c1a2ba6d7fd4a4513a88d7b7c7/diff" to get inode usage: stat /var/lib/docker/overlay2/9494f7ee0cd76da476eb91a91e6c3afb07ef14c1a2ba6d7fd4a4513a88d7b7c7/diff: no such file or directory, extraDiskErr: could not stat "/var/lib/docker/containers/bb0612c11b265cfaab181f5cb92ed5bc9b8f2ee51f237fc8f64c1b36f1bdab70" to get inode usage: stat /var/lib/docker/containers/bb0612c11b265cfaab181f5cb92ed5bc9b8f2ee51f237fc8f64c1b36f1bdab70: no such file or directory
	Sep 15 19:01:52 functional-20210915185528-22848 kubelet[5947]: E0915 19:01:52.679108    5947 fsHandler.go:114] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/docker/overlay2/6ef2bb616291753a4a08544efdf73c7d0e6409f734d52a2c8342540516880fac/diff" to get inode usage: stat /var/lib/docker/overlay2/6ef2bb616291753a4a08544efdf73c7d0e6409f734d52a2c8342540516880fac/diff: no such file or directory, extraDiskErr: could not stat "/var/lib/docker/containers/2c36a50abcdd71f2a2295a921b432ea0f2748f475b950ceed6376569449fa6e6" to get inode usage: stat /var/lib/docker/containers/2c36a50abcdd71f2a2295a921b432ea0f2748f475b950ceed6376569449fa6e6: no such file or directory
	Sep 15 19:01:52 functional-20210915185528-22848 kubelet[5947]: E0915 19:01:52.682770    5947 fsHandler.go:114] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/docker/overlay2/64d9684555b7151aa1230886a330403a1719214b8e7de95bf163ab366792ee0a/diff" to get inode usage: stat /var/lib/docker/overlay2/64d9684555b7151aa1230886a330403a1719214b8e7de95bf163ab366792ee0a/diff: no such file or directory, extraDiskErr: could not stat "/var/lib/docker/containers/5ccb2f65220cfc69b1d4149f4d9bdc333429077d5d31b3f94823b56050843cde" to get inode usage: stat /var/lib/docker/containers/5ccb2f65220cfc69b1d4149f4d9bdc333429077d5d31b3f94823b56050843cde: no such file or directory
	Sep 15 19:01:52 functional-20210915185528-22848 kubelet[5947]: I0915 19:01:52.888701    5947 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-78fcd69978-kz448 through plugin: invalid network status for"
	Sep 15 19:01:53 functional-20210915185528-22848 kubelet[5947]: I0915 19:01:53.396219    5947 scope.go:110] "RemoveContainer" containerID="20d45c87c622b52056f8e55ae3e0ee1a54d32bdf3f2599543f29c344e910a66c"
	Sep 15 19:02:35 functional-20210915185528-22848 kubelet[5947]: I0915 19:02:35.267866    5947 topology_manager.go:200] "Topology Admit Handler"
	Sep 15 19:02:35 functional-20210915185528-22848 kubelet[5947]: I0915 19:02:35.517667    5947 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdvqt\" (UniqueName: \"kubernetes.io/projected/5db4631a-6341-4ed5-b14f-a19ecbbf28a8-kube-api-access-mdvqt\") pod \"hello-node-6cbfcd7cbc-6zxnx\" (UID: \"5db4631a-6341-4ed5-b14f-a19ecbbf28a8\") "
	Sep 15 19:02:38 functional-20210915185528-22848 kubelet[5947]: I0915 19:02:38.489966    5947 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/hello-node-6cbfcd7cbc-6zxnx through plugin: invalid network status for"
	Sep 15 19:02:38 functional-20210915185528-22848 kubelet[5947]: I0915 19:02:38.497385    5947 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="0e748e18ddb7fd23242d988eddb7ffcc92b7e25185da03c987647ee05e83a159"
	Sep 15 19:02:39 functional-20210915185528-22848 kubelet[5947]: I0915 19:02:39.537819    5947 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/hello-node-6cbfcd7cbc-6zxnx through plugin: invalid network status for"
	Sep 15 19:03:15 functional-20210915185528-22848 kubelet[5947]: I0915 19:03:15.497389    5947 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/hello-node-6cbfcd7cbc-6zxnx through plugin: invalid network status for"
	
	* 
	* ==> storage-provisioner [e8b4bda89d6c] <==
	* I0915 19:01:13.158761       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0915 19:01:13.168482       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> storage-provisioner [fadd5d0d97af] <==
	* I0915 19:01:27.299909       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0915 19:01:33.258101       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0915 19:01:33.258235       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0915 19:01:50.826826       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0915 19:01:50.827584       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0ad39bb8-bb45-4cbe-ab94-bc169d169a59", APIVersion:"v1", ResourceVersion:"674", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-20210915185528-22848_4cc74288-3ad3-4030-8969-9352d5ace344 became leader
	I0915 19:01:50.828838       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-20210915185528-22848_4cc74288-3ad3-4030-8969-9352d5ace344!
	I0915 19:01:50.929360       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-20210915185528-22848_4cc74288-3ad3-4030-8969-9352d5ace344!
	

-- /stdout --
** stderr ** 
	! Executing "docker container inspect functional-20210915185528-22848 --format={{.State.Status}}" took an unusually long time: 2.1514247s
	* Restarting the docker service may improve performance.

** /stderr **

=== CONT  TestFunctional/parallel/StatusCmd
helpers_test.go:255: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-20210915185528-22848 -n functional-20210915185528-22848

=== CONT  TestFunctional/parallel/StatusCmd
helpers_test.go:255: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-20210915185528-22848 -n functional-20210915185528-22848: (6.293305s)
helpers_test.go:262: (dbg) Run:  kubectl --context functional-20210915185528-22848 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: 
helpers_test.go:273: ======> post-mortem[TestFunctional/parallel/StatusCmd]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context functional-20210915185528-22848 describe pod 
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context functional-20210915185528-22848 describe pod : exit status 1 (231.8403ms)

** stderr ** 
	error: resource name may not be empty

** /stderr **
helpers_test.go:278: kubectl --context functional-20210915185528-22848 describe pod : exit status 1
--- FAIL: TestFunctional/parallel/StatusCmd (54.07s)

TestFunctional/parallel/LoadImageFromFile (41.35s)

=== RUN   TestFunctional/parallel/LoadImageFromFile
=== PAUSE TestFunctional/parallel/LoadImageFromFile

=== CONT  TestFunctional/parallel/LoadImageFromFile
functional_test.go:281: (dbg) Run:  docker pull busybox:1.31
functional_test.go:281: (dbg) Done: docker pull busybox:1.31: (3.7789416s)
functional_test.go:288: (dbg) Run:  docker tag busybox:1.31 docker.io/library/busybox:load-from-file-functional-20210915185528-22848

=== CONT  TestFunctional/parallel/LoadImageFromFile
functional_test.go:295: (dbg) Run:  docker save -o busybox-load.tar docker.io/library/busybox:load-from-file-functional-20210915185528-22848

=== CONT  TestFunctional/parallel/LoadImageFromFile
functional_test.go:295: (dbg) Done: docker save -o busybox-load.tar docker.io/library/busybox:load-from-file-functional-20210915185528-22848: (1.0784421s)
functional_test.go:306: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915185528-22848 image load C:\jenkins\workspace\Docker_Windows_integration\busybox-load.tar
E0915 19:03:32.286388   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915183056-22848\client.crt: The system cannot find the path specified.

=== CONT  TestFunctional/parallel/LoadImageFromFile
functional_test.go:306: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915185528-22848 image load C:\jenkins\workspace\Docker_Windows_integration\busybox-load.tar: (7.0025041s)
functional_test.go:308: loading image into minikube: <nil>

** stderr ** 
	! Executing "docker container inspect functional-20210915185528-22848 --format={{.State.Status}}" took an unusually long time: 2.3810245s
	* Restarting the docker service may improve performance.

** /stderr **
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestFunctional/parallel/LoadImageFromFile]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect functional-20210915185528-22848

=== CONT  TestFunctional/parallel/LoadImageFromFile
helpers_test.go:236: (dbg) docker inspect functional-20210915185528-22848:

-- stdout --
	[
	    {
	        "Id": "911613e33eedd633c7db5193f77cefe3560041e25a83356424e967416578b7ce",
	        "Created": "2021-09-15T18:55:42.6785746Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 27207,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-09-15T18:55:44.5043231Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:83b5a81388468b1ffcd3874b4f24c1406c63c33ac07797cc8bed6ad0207d36a8",
	        "ResolvConfPath": "/var/lib/docker/containers/911613e33eedd633c7db5193f77cefe3560041e25a83356424e967416578b7ce/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/911613e33eedd633c7db5193f77cefe3560041e25a83356424e967416578b7ce/hostname",
	        "HostsPath": "/var/lib/docker/containers/911613e33eedd633c7db5193f77cefe3560041e25a83356424e967416578b7ce/hosts",
	        "LogPath": "/var/lib/docker/containers/911613e33eedd633c7db5193f77cefe3560041e25a83356424e967416578b7ce/911613e33eedd633c7db5193f77cefe3560041e25a83356424e967416578b7ce-json.log",
	        "Name": "/functional-20210915185528-22848",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-20210915185528-22848:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-20210915185528-22848",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4194304000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7b6d40e371d77fb8dadecd60798ef83c7557fe3841ddf04cd4f88525b795a293-init/diff:/var/lib/docker/overlay2/a259804ff45c264548e9459111f8eb7e789339b3253b50b62afde896e9e19e34/diff:/var/lib/docker/overlay2/61882a81480713e64bf02bef67583a0609b2be0589d08187547a88789584af86/diff:/var/lib/docker/overlay2/a41d1f5e24156c1d438fe25c567f3c3492d15cb77b1bf5545be9086be845138a/diff:/var/lib/docker/overlay2/86e30e10438032d0a02b54850ad0316347488f3d5b831234af1e91f943269850/diff:/var/lib/docker/overlay2/f6962936c0c1b0636454847e8e963a472786602e15a00d5e020827c2372acfce/diff:/var/lib/docker/overlay2/5eee83c6029359aefecbba85cc6d456e3a5a97c3ef6e9f4850e8a53c62b30ef5/diff:/var/lib/docker/overlay2/fdaa4e134ab960962e0a388adaa3a6aa59dd139cc016dfd4cdf4565bc80e8469/diff:/var/lib/docker/overlay2/9e1b9be7e17136fa81b0a224e2fab9704d3234ca119d87c14f9a676bbdb023f5/diff:/var/lib/docker/overlay2/ffe06185e93cb7ae8d48d84ea9be8817f2ae3d2aae85114ce41477579e23debd/diff:/var/lib/docker/overlay2/221713
20a621ffe79c2acb0c13308b1b0cd3bc94a4083992e7b8589b820c625c/diff:/var/lib/docker/overlay2/eb2fb3ccafd6cb1c26a9642601357b3e0563e9e9361a5ab359bf1af592a0d709/diff:/var/lib/docker/overlay2/6081368e802a14f6f6a7424eb7af3f5f29f85bf59ed0a0709ce25b53738095cb/diff:/var/lib/docker/overlay2/fd7176e5912a824a0543fa3ab5170921538a287401ff8a451c90e1ef0fd8adea/diff:/var/lib/docker/overlay2/eec5078968f5e7332ff82191a780be0efef38aef75ea7cd67723ab3d2760c281/diff:/var/lib/docker/overlay2/d18d41a44c04cb695c4b69ac0db0d5807cee4ca8a5a695629f97e2d8d9cf9461/diff:/var/lib/docker/overlay2/b125406c01cea6a83fa5515a19bb6822d1194fcd47eeb1ed541b9304804a54be/diff:/var/lib/docker/overlay2/b49ae7a2c3101c5b094f611e08fb7b68d8688cb3c333066f697aafc1dc7c2c7e/diff:/var/lib/docker/overlay2/ce599106d279966257baab0cc43ed0366d690702b449073e812a47ae6698dedf/diff:/var/lib/docker/overlay2/5f005c2e8ab4cd52b59f5118e6f5e352dd834afde547ba1ee7b71141319e3547/diff:/var/lib/docker/overlay2/2b1f9abca5d32e21fe1da66b2604d858599b74fc9359bd55e050cebccaba5c7d/diff:/var/lib/d
ocker/overlay2/a5f956d0de2a0313dfbaefb921518d8a75267b71a9e7c68207a81682db5394b5/diff:/var/lib/docker/overlay2/e0050af32b9eb0f12404cf384139cd48050d4a969d090faaa07b9f42fe954627/diff:/var/lib/docker/overlay2/f18c15fd90b361f7a13265b5426d985a47e261abde790665028916551b5218f3/diff:/var/lib/docker/overlay2/0f266ad6b65c857206fd10e121b74564370ca213f5706493619b6a590c496660/diff:/var/lib/docker/overlay2/fc044060d3681022984120753b0c02afc05afbb256dbdfc9f7f5e966e1d98820/diff:/var/lib/docker/overlay2/91df5011d1388013be2af7bb3097195366fd38d1f46d472e630aab583779f7c0/diff:/var/lib/docker/overlay2/f810a7fbc880b9ff7c367b14e34088e851fa045d860ce4bf4c49999fcf814a6e/diff:/var/lib/docker/overlay2/318584cae4acc059b81627e00ae703167673c73d234d6e64e894fc3500750f90/diff:/var/lib/docker/overlay2/a2e1d86ffb5aec517fe891619294d506621a002f4c53e8d3103d5d4ce777ebaf/diff:/var/lib/docker/overlay2/12fd1d215a6881aa03a06f2b8a5415b483530db121b120b66940e1e5cd2e1b96/diff:/var/lib/docker/overlay2/28bbbfc0404aecb7d7d79b4c2bfec07cd44260c922a982af523bda70bbd
7be20/diff:/var/lib/docker/overlay2/4dc0077174d58a8904abddfc67a48e6dd082a1eebc72518af19da37b4eff7b2c/diff:/var/lib/docker/overlay2/4d39db844b44258dbb67b16662175b453df7bfd43274abbf1968486539955750/diff:/var/lib/docker/overlay2/ca34d73c6c31358a3eb714a014a5961863e05dee505a1cfca2c8829380ce362b/diff:/var/lib/docker/overlay2/0c0595112799a0b3604c58158946fb3d0657c4198a6a72e12fbe29a74174d3ea/diff:/var/lib/docker/overlay2/5fc43276da56e90293816918613014e7cec7bedc292a062d39d034c95d56351d/diff:/var/lib/docker/overlay2/71a282cb60752128ee370ced1695c67c421341d364956818e5852fd6714a0e64/diff:/var/lib/docker/overlay2/07723c7054e35caae4987fa66d3d1fd44de0d2875612274dde2bf04e8349b0a0/diff:/var/lib/docker/overlay2/0433db88749fb49b0f02cc65b7113c97134270991a8a82bbe7ff4432aae7e502/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7b6d40e371d77fb8dadecd60798ef83c7557fe3841ddf04cd4f88525b795a293/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7b6d40e371d77fb8dadecd60798ef83c7557fe3841ddf04cd4f88525b795a293/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7b6d40e371d77fb8dadecd60798ef83c7557fe3841ddf04cd4f88525b795a293/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-20210915185528-22848",
	                "Source": "/var/lib/docker/volumes/functional-20210915185528-22848/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-20210915185528-22848",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-20210915185528-22848",
	                "name.minikube.sigs.k8s.io": "functional-20210915185528-22848",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6f05e6e475cd82f2595580c74ef64163f4cb22eab4619e370e687b6e1ec37349",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55730"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55731"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55732"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55733"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55734"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/6f05e6e475cd",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-20210915185528-22848": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "911613e33eed",
	                        "functional-20210915185528-22848"
	                    ],
	                    "NetworkID": "ca8cf024d05080a281a892a5461499862486e1e4ca113d318b5bc1d53d1bfb31",
	                    "EndpointID": "3676fc7dcc4d6c0a54b926bb37cbd4b7dec1a2d4c32b3f63c6918c551c05e699",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20210915185528-22848 -n functional-20210915185528-22848

=== CONT  TestFunctional/parallel/LoadImageFromFile
helpers_test.go:240: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20210915185528-22848 -n functional-20210915185528-22848: (6.6476683s)
helpers_test.go:245: <<< TestFunctional/parallel/LoadImageFromFile FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestFunctional/parallel/LoadImageFromFile]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915185528-22848 logs -n 25

=== CONT  TestFunctional/parallel/LoadImageFromFile
helpers_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915185528-22848 logs -n 25: (13.0541229s)
helpers_test.go:253: TestFunctional/parallel/LoadImageFromFile logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------------------|---------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	| Command |                                  Args                                  |             Profile             |          User           | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------------------|---------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	| -p      | functional-20210915185528-22848                                        | functional-20210915185528-22848 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 19:02:32 GMT | Wed, 15 Sep 2021 19:02:37 GMT |
	|         | ssh sudo cat                                                           |                                 |                         |         |                               |                               |
	|         | /etc/ssl/certs/22848.pem                                               |                                 |                         |         |                               |                               |
	| -p      | functional-20210915185528-22848                                        | functional-20210915185528-22848 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 19:02:37 GMT | Wed, 15 Sep 2021 19:02:43 GMT |
	|         | ssh sudo cat                                                           |                                 |                         |         |                               |                               |
	|         | /usr/share/ca-certificates/22848.pem                                   |                                 |                         |         |                               |                               |
	| profile | list --output json                                                     | minikube                        | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 19:02:38 GMT | Wed, 15 Sep 2021 19:02:44 GMT |
	| -p      | functional-20210915185528-22848                                        | functional-20210915185528-22848 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 19:02:43 GMT | Wed, 15 Sep 2021 19:02:49 GMT |
	|         | ssh sudo cat                                                           |                                 |                         |         |                               |                               |
	|         | /etc/ssl/certs/51391683.0                                              |                                 |                         |         |                               |                               |
	| profile | list                                                                   | minikube                        | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 19:02:45 GMT | Wed, 15 Sep 2021 19:02:51 GMT |
	| profile | list -l                                                                | minikube                        | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 19:02:51 GMT | Wed, 15 Sep 2021 19:02:51 GMT |
	| -p      | functional-20210915185528-22848                                        | functional-20210915185528-22848 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 19:02:50 GMT | Wed, 15 Sep 2021 19:02:55 GMT |
	|         | ssh sudo cat                                                           |                                 |                         |         |                               |                               |
	|         | /etc/ssl/certs/228482.pem                                              |                                 |                         |         |                               |                               |
	| profile | list -o json                                                           | minikube                        | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 19:02:52 GMT | Wed, 15 Sep 2021 19:02:57 GMT |
	| profile | list -o json --light                                                   | minikube                        | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 19:02:58 GMT | Wed, 15 Sep 2021 19:02:58 GMT |
	| -p      | functional-20210915185528-22848                                        | functional-20210915185528-22848 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 19:02:55 GMT | Wed, 15 Sep 2021 19:03:00 GMT |
	|         | ssh sudo cat                                                           |                                 |                         |         |                               |                               |
	|         | /usr/share/ca-certificates/228482.pem                                  |                                 |                         |         |                               |                               |
	| -p      | functional-20210915185528-22848                                        | functional-20210915185528-22848 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 19:03:01 GMT | Wed, 15 Sep 2021 19:03:06 GMT |
	|         | ssh sudo cat                                                           |                                 |                         |         |                               |                               |
	|         | /etc/ssl/certs/3ec20f2e.0                                              |                                 |                         |         |                               |                               |
	| -p      | functional-20210915185528-22848 image load                             | functional-20210915185528-22848 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 19:03:04 GMT | Wed, 15 Sep 2021 19:03:13 GMT |
	|         | docker.io/library/busybox:remove-functional-20210915185528-22848       |                                 |                         |         |                               |                               |
	| -p      | functional-20210915185528-22848                                        | functional-20210915185528-22848 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 19:03:13 GMT | Wed, 15 Sep 2021 19:03:17 GMT |
	|         | image ls                                                               |                                 |                         |         |                               |                               |
	| -p      | functional-20210915185528-22848                                        | functional-20210915185528-22848 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 19:02:59 GMT | Wed, 15 Sep 2021 19:03:17 GMT |
	|         | logs -n 25                                                             |                                 |                         |         |                               |                               |
	| -p      | functional-20210915185528-22848 image rm                               | functional-20210915185528-22848 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 19:03:13 GMT | Wed, 15 Sep 2021 19:03:17 GMT |
	|         | docker.io/library/busybox:remove-functional-20210915185528-22848       |                                 |                         |         |                               |                               |
	| ssh     | -p                                                                     | functional-20210915185528-22848 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 19:03:19 GMT | Wed, 15 Sep 2021 19:03:24 GMT |
	|         | functional-20210915185528-22848                                        |                                 |                         |         |                               |                               |
	|         | -- docker images                                                       |                                 |                         |         |                               |                               |
	| -p      | functional-20210915185528-22848                                        | functional-20210915185528-22848 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 19:03:24 GMT | Wed, 15 Sep 2021 19:03:30 GMT |
	|         | service list                                                           |                                 |                         |         |                               |                               |
	| -p      | functional-20210915185528-22848 image build -t                         | functional-20210915185528-22848 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 19:03:17 GMT | Wed, 15 Sep 2021 19:03:30 GMT |
	|         | localhost/my-image:functional-20210915185528-22848                     |                                 |                         |         |                               |                               |
	|         | testdata\build                                                         |                                 |                         |         |                               |                               |
	| -p      | functional-20210915185528-22848                                        | functional-20210915185528-22848 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 19:03:25 GMT | Wed, 15 Sep 2021 19:03:32 GMT |
	|         | image pull                                                             |                                 |                         |         |                               |                               |
	|         | docker.io/library/busybox:1.30                                         |                                 |                         |         |                               |                               |
	| ssh     | -p functional-20210915185528-22848                                     | functional-20210915185528-22848 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 19:03:30 GMT | Wed, 15 Sep 2021 19:03:36 GMT |
	|         | -- docker image inspect                                                |                                 |                         |         |                               |                               |
	|         | localhost/my-image:functional-20210915185528-22848                     |                                 |                         |         |                               |                               |
	| -p      | functional-20210915185528-22848 image                                  | functional-20210915185528-22848 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 19:03:32 GMT | Wed, 15 Sep 2021 19:03:37 GMT |
	|         | tag docker.io/library/busybox:1.30                                     |                                 |                         |         |                               |                               |
	|         | docker.io/library/busybox:save-to-file-functional-20210915185528-22848 |                                 |                         |         |                               |                               |
	| -p      | functional-20210915185528-22848 image load                             | functional-20210915185528-22848 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 19:03:31 GMT | Wed, 15 Sep 2021 19:03:38 GMT |
	|         | C:\jenkins\workspace\Docker_Windows_integration\busybox-load.tar       |                                 |                         |         |                               |                               |
	| -p      | functional-20210915185528-22848                                        | functional-20210915185528-22848 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 19:03:31 GMT | Wed, 15 Sep 2021 19:03:39 GMT |
	|         | image pull                                                             |                                 |                         |         |                               |                               |
	|         | docker.io/library/busybox:1.29                                         |                                 |                         |         |                               |                               |
	| -p      | functional-20210915185528-22848 image save                             | functional-20210915185528-22848 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 19:03:38 GMT | Wed, 15 Sep 2021 19:03:42 GMT |
	|         | docker.io/library/busybox:save-to-file-functional-20210915185528-22848 |                                 |                         |         |                               |                               |
	|         | C:\jenkins\workspace\Docker_Windows_integration\busybox-save.tar       |                                 |                         |         |                               |                               |
	| -p      | functional-20210915185528-22848 image                                  | functional-20210915185528-22848 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 19:03:39 GMT | Wed, 15 Sep 2021 19:03:43 GMT |
	|         | tag docker.io/library/busybox:1.29                                     |                                 |                         |         |                               |                               |
	|         | docker.io/library/busybox:save-functional-20210915185528-22848         |                                 |                         |         |                               |                               |
	|---------|------------------------------------------------------------------------|---------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/09/15 19:00:13
	Running on machine: windows-server-1
	Binary: Built with gc go1.17 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0915 19:00:13.536463    7552 out.go:298] Setting OutFile to fd 1676 ...
	I0915 19:00:13.537449    7552 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0915 19:00:13.537449    7552 out.go:311] Setting ErrFile to fd 1660...
	I0915 19:00:13.537449    7552 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0915 19:00:13.553464    7552 out.go:305] Setting JSON to false
	I0915 19:00:13.557437    7552 start.go:111] hostinfo: {"hostname":"windows-server-1","uptime":9151887,"bootTime":1622580526,"procs":154,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"9879231f-6171-435d-bab4-5b366cc6391b"}
	W0915 19:00:13.558542    7552 start.go:119] gopshost.Virtualization returned error: not implemented yet
	I0915 19:00:13.563105    7552 out.go:177] * [functional-20210915185528-22848] minikube v1.23.0 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	I0915 19:00:13.563495    7552 notify.go:169] Checking for updates...
	I0915 19:00:13.565280    7552 out.go:177]   - KUBECONFIG=C:\Users\jenkins\minikube-integration\kubeconfig
	I0915 19:00:13.567646    7552 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins\minikube-integration\.minikube
	I0915 19:00:13.570489    7552 out.go:177]   - MINIKUBE_LOCATION=12425
	I0915 19:00:13.572274    7552 config.go:177] Loaded profile config "functional-20210915185528-22848": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.1
	I0915 19:00:13.572667    7552 driver.go:343] Setting default libvirt URI to qemu:///system
	I0915 19:00:15.458820    7552 docker.go:132] docker version: linux-20.10.5
	I0915 19:00:15.472365    7552 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0915 19:00:16.441409    7552 info.go:263] docker info: {ID:AZM6:4F7P:D7J3:PGKE:EIYN:3OQU:SEA3:BB2T:P6VC:GKKH:UKSA:R2VX Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:54 SystemTime:2021-09-15 19:00:16.0009166 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0915 19:00:16.447981    7552 out.go:177] * Using the docker driver based on existing profile
	I0915 19:00:16.448196    7552 start.go:278] selected driver: docker
	I0915 19:00:16.448196    7552 start.go:751] validating driver "docker" against &{Name:functional-20210915185528-22848 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.1 ClusterName:functional-20210915185528-22848 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.22.1 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0915 19:00:16.448196    7552 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0915 19:00:16.474651    7552 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0915 19:00:17.475896    7552 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.0002566s)
	I0915 19:00:17.476321    7552 info.go:263] docker info: {ID:AZM6:4F7P:D7J3:PGKE:EIYN:3OQU:SEA3:BB2T:P6VC:GKKH:UKSA:R2VX Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:54 SystemTime:2021-09-15 19:00:17.0539245 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0915 19:00:17.555122    7552 start_flags.go:737] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 19:00:17.555296    7552 cni.go:93] Creating CNI manager for ""
	I0915 19:00:17.555296    7552 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0915 19:00:17.555296    7552 start_flags.go:278] config:
	{Name:functional-20210915185528-22848 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.1 ClusterName:functional-20210915185528-22848 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.22.1 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0915 19:00:17.559265    7552 out.go:177] * Starting control plane node functional-20210915185528-22848 in cluster functional-20210915185528-22848
	I0915 19:00:17.559458    7552 cache.go:118] Beginning downloading kic base image for docker with docker
	I0915 19:00:17.563990    7552 out.go:177] * Pulling base image ...
	I0915 19:00:17.564904    7552 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 in local docker daemon
	I0915 19:00:17.564904    7552 preload.go:131] Checking if preload exists for k8s version v1.22.1 and runtime docker
	I0915 19:00:17.565861    7552 preload.go:147] Found local preload: C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v12-v1.22.1-docker-overlay2-amd64.tar.lz4
	I0915 19:00:17.565861    7552 cache.go:57] Caching tarball of preloaded images
	I0915 19:00:17.566462    7552 preload.go:173] Found C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v12-v1.22.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0915 19:00:17.567659    7552 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.1 on docker
	I0915 19:00:17.567943    7552 profile.go:148] Saving config to C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915185528-22848\config.json ...
	I0915 19:00:18.244496    7552 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 in local docker daemon, skipping pull
	I0915 19:00:18.245061    7552 cache.go:140] gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 exists in daemon, skipping load
	I0915 19:00:18.245061    7552 cache.go:206] Successfully downloaded all kic artifacts
	I0915 19:00:18.245273    7552 start.go:313] acquiring machines lock for functional-20210915185528-22848: {Name:mkfee538efeff8d31b07f831bd3064dcc53fbc7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 19:00:18.245898    7552 start.go:317] acquired machines lock for "functional-20210915185528-22848" in 382.6µs
	I0915 19:00:18.246105    7552 start.go:93] Skipping create...Using existing machine configuration
	I0915 19:00:18.246105    7552 fix.go:55] fixHost starting: 
	I0915 19:00:18.275222    7552 cli_runner.go:115] Run: docker container inspect functional-20210915185528-22848 --format={{.State.Status}}
	I0915 19:00:18.874750    7552 fix.go:108] recreateIfNeeded on functional-20210915185528-22848: state=Running err=<nil>
	W0915 19:00:18.874750    7552 fix.go:134] unexpected machine state, will restart: <nil>
	I0915 19:00:18.878702    7552 out.go:177] * Updating the running docker "functional-20210915185528-22848" container ...
	I0915 19:00:18.879416    7552 machine.go:88] provisioning docker machine ...
	I0915 19:00:18.879416    7552 ubuntu.go:169] provisioning hostname "functional-20210915185528-22848"
	I0915 19:00:18.893166    7552 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210915185528-22848
	I0915 19:00:19.555220    7552 main.go:130] libmachine: Using SSH client type: native
	I0915 19:00:19.555921    7552 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x11facc0] 0x11fdb80 <nil>  [] 0s} 127.0.0.1 55730 <nil> <nil>}
	I0915 19:00:19.555921    7552 main.go:130] libmachine: About to run SSH command:
	sudo hostname functional-20210915185528-22848 && echo "functional-20210915185528-22848" | sudo tee /etc/hostname
	I0915 19:00:19.936181    7552 main.go:130] libmachine: SSH cmd err, output: <nil>: functional-20210915185528-22848
	
	I0915 19:00:19.949821    7552 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210915185528-22848
	I0915 19:00:20.612389    7552 main.go:130] libmachine: Using SSH client type: native
	I0915 19:00:20.612820    7552 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x11facc0] 0x11fdb80 <nil>  [] 0s} 127.0.0.1 55730 <nil> <nil>}
	I0915 19:00:20.612962    7552 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-20210915185528-22848' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-20210915185528-22848/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-20210915185528-22848' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0915 19:00:20.970994    7552 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0915 19:00:20.970994    7552 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins\minikube-integration\.minikube CaCertPath:C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins\minikube-integration\.minikube}
	I0915 19:00:20.970994    7552 ubuntu.go:177] setting up certificates
	I0915 19:00:20.971291    7552 provision.go:83] configureAuth start
	I0915 19:00:21.000300    7552 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-20210915185528-22848
	I0915 19:00:21.622477    7552 provision.go:138] copyHostCerts
	I0915 19:00:21.623254    7552 exec_runner.go:145] found C:\Users\jenkins\minikube-integration\.minikube/ca.pem, removing ...
	I0915 19:00:21.623472    7552 exec_runner.go:208] rm: C:\Users\jenkins\minikube-integration\.minikube\ca.pem
	I0915 19:00:21.623757    7552 exec_runner.go:152] cp: C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0915 19:00:21.625951    7552 exec_runner.go:145] found C:\Users\jenkins\minikube-integration\.minikube/cert.pem, removing ...
	I0915 19:00:21.625951    7552 exec_runner.go:208] rm: C:\Users\jenkins\minikube-integration\.minikube\cert.pem
	I0915 19:00:21.626248    7552 exec_runner.go:152] cp: C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0915 19:00:21.627830    7552 exec_runner.go:145] found C:\Users\jenkins\minikube-integration\.minikube/key.pem, removing ...
	I0915 19:00:21.627830    7552 exec_runner.go:208] rm: C:\Users\jenkins\minikube-integration\.minikube\key.pem
	I0915 19:00:21.628322    7552 exec_runner.go:152] cp: C:\Users\jenkins\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins\minikube-integration\.minikube/key.pem (1675 bytes)
	I0915 19:00:21.630091    7552 provision.go:112] generating server cert: C:\Users\jenkins\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-20210915185528-22848 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube functional-20210915185528-22848]
	I0915 19:00:21.847786    7552 provision.go:172] copyRemoteCerts
	I0915 19:00:21.859789    7552 ssh_runner.go:152] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0915 19:00:21.870571    7552 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210915185528-22848
	I0915 19:00:22.517377    7552 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55730 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\functional-20210915185528-22848\id_rsa Username:docker}
	I0915 19:00:22.722975    7552 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0915 19:00:22.816822    7552 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1265 bytes)
	I0915 19:00:22.902055    7552 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0915 19:00:22.982537    7552 provision.go:86] duration metric: configureAuth took 2.0112586s
	I0915 19:00:22.982537    7552 ubuntu.go:193] setting minikube options for container-runtime
	I0915 19:00:22.983056    7552 config.go:177] Loaded profile config "functional-20210915185528-22848": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.1
	I0915 19:00:23.002587    7552 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210915185528-22848
	I0915 19:00:23.632065    7552 main.go:130] libmachine: Using SSH client type: native
	I0915 19:00:23.632486    7552 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x11facc0] 0x11fdb80 <nil>  [] 0s} 127.0.0.1 55730 <nil> <nil>}
	I0915 19:00:23.632656    7552 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0915 19:00:23.993806    7552 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0915 19:00:23.993806    7552 ubuntu.go:71] root file system type: overlay
	I0915 19:00:23.994788    7552 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0915 19:00:24.008709    7552 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210915185528-22848
	I0915 19:00:24.634751    7552 main.go:130] libmachine: Using SSH client type: native
	I0915 19:00:24.635019    7552 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x11facc0] 0x11fdb80 <nil>  [] 0s} 127.0.0.1 55730 <nil> <nil>}
	I0915 19:00:24.635372    7552 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0915 19:00:25.003838    7552 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0915 19:00:25.023561    7552 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210915185528-22848
	I0915 19:00:25.626350    7552 main.go:130] libmachine: Using SSH client type: native
	I0915 19:00:25.626350    7552 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x11facc0] 0x11fdb80 <nil>  [] 0s} 127.0.0.1 55730 <nil> <nil>}
	I0915 19:00:25.626350    7552 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0915 19:00:25.989874    7552 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0915 19:00:25.989874    7552 machine.go:91] provisioned docker machine in 7.1105034s
	I0915 19:00:25.989874    7552 start.go:267] post-start starting for "functional-20210915185528-22848" (driver="docker")
	I0915 19:00:25.989874    7552 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0915 19:00:26.010030    7552 ssh_runner.go:152] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0915 19:00:26.028601    7552 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210915185528-22848
	I0915 19:00:26.644968    7552 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55730 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\functional-20210915185528-22848\id_rsa Username:docker}
	I0915 19:00:26.895208    7552 ssh_runner.go:152] Run: cat /etc/os-release
	I0915 19:00:26.915232    7552 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0915 19:00:26.915232    7552 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0915 19:00:26.915232    7552 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0915 19:00:26.915232    7552 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0915 19:00:26.915232    7552 filesync.go:126] Scanning C:\Users\jenkins\minikube-integration\.minikube\addons for local assets ...
	I0915 19:00:26.916206    7552 filesync.go:126] Scanning C:\Users\jenkins\minikube-integration\.minikube\files for local assets ...
	I0915 19:00:26.916206    7552 filesync.go:149] local asset: C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\228482.pem -> 228482.pem in /etc/ssl/certs
	I0915 19:00:26.917205    7552 filesync.go:149] local asset: C:\Users\jenkins\minikube-integration\.minikube\files\etc\test\nested\copy\22848\hosts -> hosts in /etc/test/nested/copy/22848
	I0915 19:00:26.929206    7552 ssh_runner.go:152] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/22848
	I0915 19:00:26.965891    7552 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\228482.pem --> /etc/ssl/certs/228482.pem (1708 bytes)
	I0915 19:00:27.040972    7552 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\files\etc\test\nested\copy\22848\hosts --> /etc/test/nested/copy/22848/hosts (40 bytes)
	I0915 19:00:27.112983    7552 start.go:270] post-start completed in 1.1231159s
	I0915 19:00:27.129755    7552 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 19:00:27.139480    7552 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210915185528-22848
	I0915 19:00:27.781594    7552 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55730 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\functional-20210915185528-22848\id_rsa Username:docker}
	I0915 19:00:28.010129    7552 fix.go:57] fixHost completed within 9.7640872s
	I0915 19:00:28.010129    7552 start.go:80] releasing machines lock for "functional-20210915185528-22848", held for 9.764294s
	I0915 19:00:28.032189    7552 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-20210915185528-22848
	I0915 19:00:28.651617    7552 ssh_runner.go:152] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0915 19:00:28.665728    7552 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210915185528-22848
	I0915 19:00:28.666585    7552 ssh_runner.go:152] Run: systemctl --version
	I0915 19:00:28.676495    7552 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210915185528-22848
	I0915 19:00:29.345614    7552 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55730 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\functional-20210915185528-22848\id_rsa Username:docker}
	I0915 19:00:29.422816    7552 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55730 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\functional-20210915185528-22848\id_rsa Username:docker}
	I0915 19:00:29.645778    7552 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service containerd
	I0915 19:00:29.838434    7552 ssh_runner.go:192] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.1865719s)
	I0915 19:00:29.847432    7552 ssh_runner.go:152] Run: sudo systemctl cat docker.service
	I0915 19:00:29.908656    7552 cruntime.go:255] skipping containerd shutdown because we are bound to it
	I0915 19:00:29.929638    7552 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service crio
	I0915 19:00:29.991467    7552 ssh_runner.go:152] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0915 19:00:30.074961    7552 ssh_runner.go:152] Run: sudo systemctl unmask docker.service
	I0915 19:00:30.440840    7552 ssh_runner.go:152] Run: sudo systemctl enable docker.socket
	I0915 19:00:30.766843    7552 ssh_runner.go:152] Run: sudo systemctl cat docker.service
	I0915 19:00:30.823358    7552 ssh_runner.go:152] Run: sudo systemctl daemon-reload
	I0915 19:00:31.140281    7552 ssh_runner.go:152] Run: sudo systemctl start docker
	I0915 19:00:31.203105    7552 ssh_runner.go:152] Run: docker version --format {{.Server.Version}}
	I0915 19:00:31.399767    7552 ssh_runner.go:152] Run: docker version --format {{.Server.Version}}
	I0915 19:00:31.574649    7552 out.go:204] * Preparing Kubernetes v1.22.1 on Docker 20.10.8 ...
	I0915 19:00:31.589438    7552 cli_runner.go:115] Run: docker exec -t functional-20210915185528-22848 dig +short host.docker.internal
	I0915 19:00:32.620973    7552 cli_runner.go:168] Completed: docker exec -t functional-20210915185528-22848 dig +short host.docker.internal: (1.0315411s)
	I0915 19:00:32.620973    7552 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0915 19:00:32.645718    7552 ssh_runner.go:152] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0915 19:00:32.727408    7552 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-20210915185528-22848
	I0915 19:00:33.408286    7552 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0915 19:00:33.408540    7552 preload.go:131] Checking if preload exists for k8s version v1.22.1 and runtime docker
	I0915 19:00:33.425022    7552 ssh_runner.go:152] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0915 19:00:33.600991    7552 docker.go:558] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-20210915185528-22848
	k8s.gcr.io/kube-apiserver:v1.22.1
	k8s.gcr.io/kube-scheduler:v1.22.1
	k8s.gcr.io/kube-proxy:v1.22.1
	k8s.gcr.io/kube-controller-manager:v1.22.1
	k8s.gcr.io/etcd:3.5.0-0
	k8s.gcr.io/coredns/coredns:v1.8.4
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.5
	kubernetesui/dashboard:v2.1.0
	k8s.gcr.io/pause:3.3
	kubernetesui/metrics-scraper:v1.0.4
	k8s.gcr.io/pause:3.1
	k8s.gcr.io/pause:latest
	
	-- /stdout --
	I0915 19:00:33.600991    7552 docker.go:489] Images already preloaded, skipping extraction
	I0915 19:00:33.610210    7552 ssh_runner.go:152] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0915 19:00:33.886915    7552 docker.go:558] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-20210915185528-22848
	k8s.gcr.io/kube-apiserver:v1.22.1
	k8s.gcr.io/kube-controller-manager:v1.22.1
	k8s.gcr.io/kube-proxy:v1.22.1
	k8s.gcr.io/kube-scheduler:v1.22.1
	k8s.gcr.io/etcd:3.5.0-0
	k8s.gcr.io/coredns/coredns:v1.8.4
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.5
	kubernetesui/dashboard:v2.1.0
	k8s.gcr.io/pause:3.3
	kubernetesui/metrics-scraper:v1.0.4
	k8s.gcr.io/pause:3.1
	k8s.gcr.io/pause:latest
	
	-- /stdout --
	I0915 19:00:33.886915    7552 cache_images.go:78] Images are preloaded, skipping loading
	I0915 19:00:33.901609    7552 ssh_runner.go:152] Run: docker info --format {{.CgroupDriver}}
	I0915 19:00:34.304232    7552 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0915 19:00:34.304353    7552 cni.go:93] Creating CNI manager for ""
	I0915 19:00:34.304353    7552 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0915 19:00:34.304353    7552 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0915 19:00:34.304550    7552 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.22.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-20210915185528-22848 NodeName:functional-20210915185528-22848 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0915 19:00:34.305202    7552 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "functional-20210915185528-22848"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.22.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0915 19:00:34.305820    7552 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.22.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=functional-20210915185528-22848 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.22.1 ClusterName:functional-20210915185528-22848 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:}
	I0915 19:00:34.332407    7552 ssh_runner.go:152] Run: sudo ls /var/lib/minikube/binaries/v1.22.1
	I0915 19:00:34.398521    7552 binaries.go:44] Found k8s binaries, skipping transfer
	I0915 19:00:34.416440    7552 ssh_runner.go:152] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0915 19:00:34.453371    7552 ssh_runner.go:319] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (357 bytes)
	I0915 19:00:34.521837    7552 ssh_runner.go:319] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0915 19:00:34.587876    7552 ssh_runner.go:319] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1924 bytes)
	I0915 19:00:34.671732    7552 ssh_runner.go:152] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0915 19:00:34.697110    7552 certs.go:52] Setting up C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915185528-22848 for IP: 192.168.49.2
	I0915 19:00:34.697407    7552 certs.go:179] skipping minikubeCA CA generation: C:\Users\jenkins\minikube-integration\.minikube\ca.key
	I0915 19:00:34.697773    7552 certs.go:179] skipping proxyClientCA CA generation: C:\Users\jenkins\minikube-integration\.minikube\proxy-client-ca.key
	I0915 19:00:34.698398    7552 certs.go:293] skipping minikube-user signed cert generation: C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915185528-22848\client.key
	I0915 19:00:34.698671    7552 certs.go:293] skipping minikube signed cert generation: C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915185528-22848\apiserver.key.dd3b5fb2
	I0915 19:00:34.699219    7552 certs.go:293] skipping aggregator signed cert generation: C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915185528-22848\proxy-client.key
	I0915 19:00:34.700978    7552 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\22848.pem (1338 bytes)
	W0915 19:00:34.701333    7552 certs.go:372] ignoring C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\22848_empty.pem, impossibly tiny 0 bytes
	I0915 19:00:34.701481    7552 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0915 19:00:34.701816    7552 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0915 19:00:34.702043    7552 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0915 19:00:34.702303    7552 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0915 19:00:34.702977    7552 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\228482.pem (1708 bytes)
	I0915 19:00:34.712218    7552 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915185528-22848\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0915 19:00:34.811175    7552 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915185528-22848\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0915 19:00:34.928177    7552 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915185528-22848\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0915 19:00:35.032775    7552 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915185528-22848\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0915 19:00:35.113323    7552 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0915 19:00:35.183330    7552 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0915 19:00:35.270029    7552 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0915 19:00:35.352451    7552 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0915 19:00:35.437270    7552 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\certs\22848.pem --> /usr/share/ca-certificates/22848.pem (1338 bytes)
	I0915 19:00:35.527466    7552 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\228482.pem --> /usr/share/ca-certificates/228482.pem (1708 bytes)
	I0915 19:00:35.605186    7552 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0915 19:00:35.693505    7552 ssh_runner.go:319] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0915 19:00:35.793720    7552 ssh_runner.go:152] Run: openssl version
	I0915 19:00:35.847506    7552 ssh_runner.go:152] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22848.pem && ln -fs /usr/share/ca-certificates/22848.pem /etc/ssl/certs/22848.pem"
	I0915 19:00:35.908543    7552 ssh_runner.go:152] Run: ls -la /usr/share/ca-certificates/22848.pem
	I0915 19:00:35.936176    7552 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Sep 15 18:55 /usr/share/ca-certificates/22848.pem
	I0915 19:00:35.961580    7552 ssh_runner.go:152] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22848.pem
	I0915 19:00:36.025066    7552 ssh_runner.go:152] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22848.pem /etc/ssl/certs/51391683.0"
	I0915 19:00:36.083877    7552 ssh_runner.go:152] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/228482.pem && ln -fs /usr/share/ca-certificates/228482.pem /etc/ssl/certs/228482.pem"
	I0915 19:00:36.139683    7552 ssh_runner.go:152] Run: ls -la /usr/share/ca-certificates/228482.pem
	I0915 19:00:36.164246    7552 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Sep 15 18:55 /usr/share/ca-certificates/228482.pem
	I0915 19:00:36.178553    7552 ssh_runner.go:152] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/228482.pem
	I0915 19:00:36.233689    7552 ssh_runner.go:152] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/228482.pem /etc/ssl/certs/3ec20f2e.0"
	I0915 19:00:36.290690    7552 ssh_runner.go:152] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0915 19:00:36.354396    7552 ssh_runner.go:152] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0915 19:00:36.373414    7552 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Sep 15 18:34 /usr/share/ca-certificates/minikubeCA.pem
	I0915 19:00:36.390300    7552 ssh_runner.go:152] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0915 19:00:36.433757    7552 ssh_runner.go:152] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0915 19:00:36.485753    7552 kubeadm.go:390] StartCluster: {Name:functional-20210915185528-22848 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.1 ClusterName:functional-20210915185528-22848 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.22.1 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0915 19:00:36.511758    7552 ssh_runner.go:152] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0915 19:00:36.643633    7552 ssh_runner.go:152] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0915 19:00:36.682590    7552 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0915 19:00:36.682590    7552 kubeadm.go:600] restartCluster start
	I0915 19:00:36.697377    7552 ssh_runner.go:152] Run: sudo test -d /data/minikube
	I0915 19:00:36.745944    7552 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0915 19:00:36.771293    7552 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-20210915185528-22848
	I0915 19:00:37.428475    7552 kubeconfig.go:93] found "functional-20210915185528-22848" server: "https://127.0.0.1:55734"
	I0915 19:00:37.463562    7552 ssh_runner.go:152] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0915 19:00:37.511123    7552 kubeadm.go:568] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2021-09-15 18:57:49.316221000 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2021-09-15 19:00:34.635703000 +0000
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
	I0915 19:00:37.511123    7552 kubeadm.go:1032] stopping kube-system containers ...
	I0915 19:00:37.525521    7552 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0915 19:00:37.665632    7552 docker.go:390] Stopping containers: [a3ab1112f8c0 be181f506c70 6c33ffd51974 0aa327ea6855 5834041e16ab a49d3bd8af1b 1efadde39b35 08710a1f70a2 6cb84dfa0246 2d654b63f5b3 38322ac06c0e 59fe5e6148f1 d8a56d9a35cc 4c237c5570c4]
	I0915 19:00:37.679550    7552 ssh_runner.go:152] Run: docker stop a3ab1112f8c0 be181f506c70 6c33ffd51974 0aa327ea6855 5834041e16ab a49d3bd8af1b 1efadde39b35 08710a1f70a2 6cb84dfa0246 2d654b63f5b3 38322ac06c0e 59fe5e6148f1 d8a56d9a35cc 4c237c5570c4
	I0915 19:00:45.058982    7552 ssh_runner.go:192] Completed: docker stop a3ab1112f8c0 be181f506c70 6c33ffd51974 0aa327ea6855 5834041e16ab a49d3bd8af1b 1efadde39b35 08710a1f70a2 6cb84dfa0246 2d654b63f5b3 38322ac06c0e 59fe5e6148f1 d8a56d9a35cc 4c237c5570c4: (7.3789782s)
	I0915 19:00:45.080360    7552 ssh_runner.go:152] Run: sudo systemctl stop kubelet
	I0915 19:00:45.891251    7552 ssh_runner.go:152] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0915 19:00:46.056331    7552 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5639 Sep 15 18:57 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Sep 15 18:57 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 Sep 15 18:58 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Sep 15 18:57 /etc/kubernetes/scheduler.conf
	
	I0915 19:00:46.070693    7552 ssh_runner.go:152] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0915 19:00:46.187567    7552 ssh_runner.go:152] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0915 19:00:46.382892    7552 ssh_runner.go:152] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0915 19:00:46.503248    7552 kubeadm.go:165] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0915 19:00:46.518243    7552 ssh_runner.go:152] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0915 19:00:46.603710    7552 ssh_runner.go:152] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0915 19:00:46.669984    7552 kubeadm.go:165] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0915 19:00:46.680853    7552 ssh_runner.go:152] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0915 19:00:46.737395    7552 ssh_runner.go:152] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0915 19:00:46.788027    7552 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0915 19:00:46.788027    7552 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.1:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0915 19:00:47.212049    7552 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.1:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0915 19:00:49.915677    7552 ssh_runner.go:192] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.1:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.7036449s)
	I0915 19:00:49.915677    7552 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.1:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0915 19:00:50.479922    7552 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.1:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0915 19:00:50.759031    7552 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.1:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0915 19:00:50.991798    7552 api_server.go:50] waiting for apiserver process to appear ...
	I0915 19:00:51.022551    7552 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 19:00:51.092269    7552 api_server.go:70] duration metric: took 100.4716ms to wait for apiserver process to appear ...
	I0915 19:00:51.092269    7552 api_server.go:86] waiting for apiserver healthz status ...
	I0915 19:00:51.092740    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:00:56.096018    7552 api_server.go:255] stopped: https://127.0.0.1:55734/healthz: Get "https://127.0.0.1:55734/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 19:00:56.597717    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:01.600665    7552 api_server.go:255] stopped: https://127.0.0.1:55734/healthz: Get "https://127.0.0.1:55734/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 19:01:02.101086    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:07.102908    7552 api_server.go:255] stopped: https://127.0.0.1:55734/healthz: Get "https://127.0.0.1:55734/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 19:01:07.597196    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:08.758430    7552 api_server.go:265] https://127.0.0.1:55734/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0915 19:01:08.758430    7552 api_server.go:101] status: https://127.0.0.1:55734/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0915 19:01:09.096713    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:09.210461    7552 api_server.go:265] https://127.0.0.1:55734/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0915 19:01:09.210461    7552 api_server.go:101] status: https://127.0.0.1:55734/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0915 19:01:09.598437    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:09.675399    7552 api_server.go:265] https://127.0.0.1:55734/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0915 19:01:09.675399    7552 api_server.go:101] status: https://127.0.0.1:55734/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0915 19:01:10.097529    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:10.269864    7552 api_server.go:265] https://127.0.0.1:55734/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0915 19:01:10.269864    7552 api_server.go:101] status: https://127.0.0.1:55734/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0915 19:01:10.596847    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:10.697126    7552 api_server.go:265] https://127.0.0.1:55734/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0915 19:01:10.697126    7552 api_server.go:101] status: https://127.0.0.1:55734/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0915 19:01:11.097913    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:11.488027    7552 api_server.go:255] stopped: https://127.0.0.1:55734/healthz: Get "https://127.0.0.1:55734/healthz": EOF
	I0915 19:01:11.597688    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:11.624403    7552 api_server.go:255] stopped: https://127.0.0.1:55734/healthz: Get "https://127.0.0.1:55734/healthz": EOF
	I0915 19:01:12.097690    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:12.114143    7552 api_server.go:255] stopped: https://127.0.0.1:55734/healthz: Get "https://127.0.0.1:55734/healthz": EOF
	I0915 19:01:12.597572    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:12.616329    7552 api_server.go:255] stopped: https://127.0.0.1:55734/healthz: Get "https://127.0.0.1:55734/healthz": EOF
	I0915 19:01:13.098124    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:13.107265    7552 api_server.go:255] stopped: https://127.0.0.1:55734/healthz: Get "https://127.0.0.1:55734/healthz": EOF
	I0915 19:01:13.596637    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:13.617341    7552 api_server.go:255] stopped: https://127.0.0.1:55734/healthz: Get "https://127.0.0.1:55734/healthz": EOF
	I0915 19:01:14.096776    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:14.114375    7552 api_server.go:255] stopped: https://127.0.0.1:55734/healthz: Get "https://127.0.0.1:55734/healthz": EOF
	I0915 19:01:14.597055    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:14.609505    7552 api_server.go:255] stopped: https://127.0.0.1:55734/healthz: Get "https://127.0.0.1:55734/healthz": EOF
	I0915 19:01:15.096971    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:15.104308    7552 api_server.go:255] stopped: https://127.0.0.1:55734/healthz: Get "https://127.0.0.1:55734/healthz": EOF
	I0915 19:01:15.596952    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:15.604690    7552 api_server.go:255] stopped: https://127.0.0.1:55734/healthz: Get "https://127.0.0.1:55734/healthz": EOF
	I0915 19:01:16.096534    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:16.102563    7552 api_server.go:255] stopped: https://127.0.0.1:55734/healthz: Get "https://127.0.0.1:55734/healthz": EOF
	I0915 19:01:16.596762    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:16.604667    7552 api_server.go:255] stopped: https://127.0.0.1:55734/healthz: Get "https://127.0.0.1:55734/healthz": EOF
	I0915 19:01:17.097264    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:17.108386    7552 api_server.go:255] stopped: https://127.0.0.1:55734/healthz: Get "https://127.0.0.1:55734/healthz": EOF
	I0915 19:01:17.596332    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:17.604644    7552 api_server.go:255] stopped: https://127.0.0.1:55734/healthz: Get "https://127.0.0.1:55734/healthz": EOF
	I0915 19:01:18.096524    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:18.109790    7552 api_server.go:255] stopped: https://127.0.0.1:55734/healthz: Get "https://127.0.0.1:55734/healthz": EOF
	I0915 19:01:18.596288    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:18.605108    7552 api_server.go:255] stopped: https://127.0.0.1:55734/healthz: Get "https://127.0.0.1:55734/healthz": EOF
	I0915 19:01:19.096925    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:19.104242    7552 api_server.go:255] stopped: https://127.0.0.1:55734/healthz: Get "https://127.0.0.1:55734/healthz": EOF
	I0915 19:01:19.598206    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:19.607880    7552 api_server.go:255] stopped: https://127.0.0.1:55734/healthz: Get "https://127.0.0.1:55734/healthz": EOF
	I0915 19:01:20.097120    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:20.107005    7552 api_server.go:255] stopped: https://127.0.0.1:55734/healthz: Get "https://127.0.0.1:55734/healthz": EOF
	I0915 19:01:20.598310    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:20.605057    7552 api_server.go:255] stopped: https://127.0.0.1:55734/healthz: Get "https://127.0.0.1:55734/healthz": EOF
	I0915 19:01:21.096803    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:21.104367    7552 api_server.go:255] stopped: https://127.0.0.1:55734/healthz: Get "https://127.0.0.1:55734/healthz": EOF
	I0915 19:01:21.596748    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:21.603998    7552 api_server.go:255] stopped: https://127.0.0.1:55734/healthz: Get "https://127.0.0.1:55734/healthz": EOF
	I0915 19:01:22.097809    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:22.104561    7552 api_server.go:255] stopped: https://127.0.0.1:55734/healthz: Get "https://127.0.0.1:55734/healthz": EOF
	I0915 19:01:22.596737    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:22.604429    7552 api_server.go:255] stopped: https://127.0.0.1:55734/healthz: Get "https://127.0.0.1:55734/healthz": EOF
	I0915 19:01:23.097396    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:23.106109    7552 api_server.go:255] stopped: https://127.0.0.1:55734/healthz: Get "https://127.0.0.1:55734/healthz": EOF
	I0915 19:01:23.596481    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:23.607400    7552 api_server.go:255] stopped: https://127.0.0.1:55734/healthz: Get "https://127.0.0.1:55734/healthz": EOF
	I0915 19:01:24.098847    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:24.115445    7552 api_server.go:255] stopped: https://127.0.0.1:55734/healthz: Get "https://127.0.0.1:55734/healthz": EOF
	I0915 19:01:24.596523    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:29.597692    7552 api_server.go:255] stopped: https://127.0.0.1:55734/healthz: Get "https://127.0.0.1:55734/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 19:01:29.598060    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:32.874687    7552 api_server.go:265] https://127.0.0.1:55734/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0915 19:01:32.875153    7552 api_server.go:101] status: https://127.0.0.1:55734/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0915 19:01:33.096218    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:33.179763    7552 api_server.go:265] https://127.0.0.1:55734/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0915 19:01:33.179763    7552 api_server.go:101] status: https://127.0.0.1:55734/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0915 19:01:33.596289    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:33.764419    7552 api_server.go:265] https://127.0.0.1:55734/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0915 19:01:33.764419    7552 api_server.go:101] status: https://127.0.0.1:55734/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0915 19:01:34.095934    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:34.178744    7552 api_server.go:265] https://127.0.0.1:55734/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0915 19:01:34.178744    7552 api_server.go:101] status: https://127.0.0.1:55734/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0915 19:01:34.596581    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:34.675732    7552 api_server.go:265] https://127.0.0.1:55734/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0915 19:01:34.676074    7552 api_server.go:101] status: https://127.0.0.1:55734/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0915 19:01:35.096004    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:35.186695    7552 api_server.go:265] https://127.0.0.1:55734/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0915 19:01:35.186695    7552 api_server.go:101] status: https://127.0.0.1:55734/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0915 19:01:35.596578    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:01:35.628009    7552 api_server.go:265] https://127.0.0.1:55734/healthz returned 200:
	ok
	I0915 19:01:35.671034    7552 api_server.go:139] control plane version: v1.22.1
	I0915 19:01:35.671034    7552 api_server.go:129] duration metric: took 44.5790507s to wait for apiserver health ...
	I0915 19:01:35.671034    7552 cni.go:93] Creating CNI manager for ""
	I0915 19:01:35.671034    7552 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0915 19:01:35.671449    7552 system_pods.go:43] waiting for kube-system pods to appear ...
	I0915 19:01:35.716456    7552 system_pods.go:59] 7 kube-system pods found
	I0915 19:01:35.716456    7552 system_pods.go:61] "coredns-78fcd69978-kz448" [02c2eb54-ff80-44e8-8801-802e1dc5625f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0915 19:01:35.716456    7552 system_pods.go:61] "etcd-functional-20210915185528-22848" [a5e1a1e8-31bc-44a8-8f69-78a87dd8bfeb] Running
	I0915 19:01:35.716456    7552 system_pods.go:61] "kube-apiserver-functional-20210915185528-22848" [b6c28e24-2401-4fe3-ab11-91ea44668cd1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0915 19:01:35.716456    7552 system_pods.go:61] "kube-controller-manager-functional-20210915185528-22848" [344a80f2-afc1-414e-8446-e8eeff71ec5c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0915 19:01:35.716456    7552 system_pods.go:61] "kube-proxy-75lgx" [f51bd18e-0ce7-4813-ba9e-d2eedb280750] Running
	I0915 19:01:35.716456    7552 system_pods.go:61] "kube-scheduler-functional-20210915185528-22848" [d7d0d620-55ad-4a82-994c-0f03657affe9] Running
	I0915 19:01:35.716456    7552 system_pods.go:61] "storage-provisioner" [0079c81b-04ae-439f-be17-1a1ba8697238] Running
	I0915 19:01:35.716456    7552 system_pods.go:74] duration metric: took 45.008ms to wait for pod list to return data ...
	I0915 19:01:35.716456    7552 node_conditions.go:102] verifying NodePressure condition ...
	I0915 19:01:35.731769    7552 node_conditions.go:122] node storage ephemeral capacity is 65792556Ki
	I0915 19:01:35.731890    7552 node_conditions.go:123] node cpu capacity is 4
	I0915 19:01:35.731890    7552 node_conditions.go:105] duration metric: took 15.434ms to run NodePressure ...
	I0915 19:01:35.732126    7552 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.1:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0915 19:01:36.439081    7552 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0915 19:01:36.476002    7552 kubeadm.go:746] kubelet initialised
	I0915 19:01:36.476002    7552 kubeadm.go:747] duration metric: took 36.9216ms waiting for restarted kubelet to initialise ...
	I0915 19:01:36.476002    7552 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 19:01:36.519211    7552 pod_ready.go:78] waiting up to 4m0s for pod "coredns-78fcd69978-kz448" in "kube-system" namespace to be "Ready" ...
	I0915 19:01:38.599784    7552 pod_ready.go:102] pod "coredns-78fcd69978-kz448" in "kube-system" namespace has status "Ready":"False"
	I0915 19:01:40.097774    7552 pod_ready.go:92] pod "coredns-78fcd69978-kz448" in "kube-system" namespace has status "Ready":"True"
	I0915 19:01:40.097949    7552 pod_ready.go:81] duration metric: took 3.5787608s waiting for pod "coredns-78fcd69978-kz448" in "kube-system" namespace to be "Ready" ...
	I0915 19:01:40.097949    7552 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-20210915185528-22848" in "kube-system" namespace to be "Ready" ...
	I0915 19:01:40.128033    7552 pod_ready.go:92] pod "etcd-functional-20210915185528-22848" in "kube-system" namespace has status "Ready":"True"
	I0915 19:01:40.128033    7552 pod_ready.go:81] duration metric: took 30.0837ms waiting for pod "etcd-functional-20210915185528-22848" in "kube-system" namespace to be "Ready" ...
	I0915 19:01:40.128033    7552 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-20210915185528-22848" in "kube-system" namespace to be "Ready" ...
	I0915 19:01:40.166047    7552 pod_ready.go:92] pod "kube-apiserver-functional-20210915185528-22848" in "kube-system" namespace has status "Ready":"True"
	I0915 19:01:40.166047    7552 pod_ready.go:81] duration metric: took 38.0148ms waiting for pod "kube-apiserver-functional-20210915185528-22848" in "kube-system" namespace to be "Ready" ...
	I0915 19:01:40.166047    7552 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-20210915185528-22848" in "kube-system" namespace to be "Ready" ...
	I0915 19:01:42.324900    7552 pod_ready.go:102] pod "kube-controller-manager-functional-20210915185528-22848" in "kube-system" namespace has status "Ready":"False"
	I0915 19:01:44.759614    7552 pod_ready.go:102] pod "kube-controller-manager-functional-20210915185528-22848" in "kube-system" namespace has status "Ready":"False"
	I0915 19:01:47.250491    7552 pod_ready.go:102] pod "kube-controller-manager-functional-20210915185528-22848" in "kube-system" namespace has status "Ready":"False"
	I0915 19:01:49.253792    7552 pod_ready.go:102] pod "kube-controller-manager-functional-20210915185528-22848" in "kube-system" namespace has status "Ready":"False"
	I0915 19:01:51.260700    7552 pod_ready.go:102] pod "kube-controller-manager-functional-20210915185528-22848" in "kube-system" namespace has status "Ready":"False"
	I0915 19:01:53.789932    7552 pod_ready.go:102] pod "kube-controller-manager-functional-20210915185528-22848" in "kube-system" namespace has status "Ready":"False"
	I0915 19:01:56.269062    7552 pod_ready.go:102] pod "kube-controller-manager-functional-20210915185528-22848" in "kube-system" namespace has status "Ready":"False"
	I0915 19:01:58.751404    7552 pod_ready.go:102] pod "kube-controller-manager-functional-20210915185528-22848" in "kube-system" namespace has status "Ready":"False"
	I0915 19:02:00.754716    7552 pod_ready.go:102] pod "kube-controller-manager-functional-20210915185528-22848" in "kube-system" namespace has status "Ready":"False"
	I0915 19:02:02.762317    7552 pod_ready.go:102] pod "kube-controller-manager-functional-20210915185528-22848" in "kube-system" namespace has status "Ready":"False"
	I0915 19:02:05.264452    7552 pod_ready.go:102] pod "kube-controller-manager-functional-20210915185528-22848" in "kube-system" namespace has status "Ready":"False"
	I0915 19:02:07.302728    7552 pod_ready.go:102] pod "kube-controller-manager-functional-20210915185528-22848" in "kube-system" namespace has status "Ready":"False"
	I0915 19:02:09.755338    7552 pod_ready.go:92] pod "kube-controller-manager-functional-20210915185528-22848" in "kube-system" namespace has status "Ready":"True"
	I0915 19:02:09.755338    7552 pod_ready.go:81] duration metric: took 29.5894804s waiting for pod "kube-controller-manager-functional-20210915185528-22848" in "kube-system" namespace to be "Ready" ...
	I0915 19:02:09.755338    7552 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-75lgx" in "kube-system" namespace to be "Ready" ...
	I0915 19:02:09.803007    7552 pod_ready.go:92] pod "kube-proxy-75lgx" in "kube-system" namespace has status "Ready":"True"
	I0915 19:02:09.803007    7552 pod_ready.go:81] duration metric: took 47.6688ms waiting for pod "kube-proxy-75lgx" in "kube-system" namespace to be "Ready" ...
	I0915 19:02:09.803007    7552 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-20210915185528-22848" in "kube-system" namespace to be "Ready" ...
	I0915 19:02:09.838680    7552 pod_ready.go:92] pod "kube-scheduler-functional-20210915185528-22848" in "kube-system" namespace has status "Ready":"True"
	I0915 19:02:09.838680    7552 pod_ready.go:81] duration metric: took 35.6733ms waiting for pod "kube-scheduler-functional-20210915185528-22848" in "kube-system" namespace to be "Ready" ...
	I0915 19:02:09.838680    7552 pod_ready.go:38] duration metric: took 33.3628915s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 19:02:09.838853    7552 ssh_runner.go:152] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0915 19:02:09.926642    7552 ops.go:34] apiserver oom_adj: -16
	I0915 19:02:09.926642    7552 kubeadm.go:604] restartCluster took 1m33.2446488s
	I0915 19:02:09.926642    7552 kubeadm.go:392] StartCluster complete in 1m33.4414873s
	I0915 19:02:09.927036    7552 settings.go:142] acquiring lock: {Name:mk81656fcf8bcddd49caaa1adb1c177165a02100 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 19:02:09.927652    7552 settings.go:150] Updating kubeconfig:  C:\Users\jenkins\minikube-integration\kubeconfig
	I0915 19:02:09.929848    7552 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\kubeconfig: {Name:mk312e0248780fd448f3a83862df8ee597f47373 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 19:02:10.017209    7552 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "functional-20210915185528-22848" rescaled to 1
	I0915 19:02:10.017209    7552 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.22.1 ControlPlane:true Worker:true}
	I0915 19:02:10.022625    7552 out.go:177] * Verifying Kubernetes components...
	I0915 19:02:10.017209    7552 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0915 19:02:10.018293    7552 config.go:177] Loaded profile config "functional-20210915185528-22848": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.1
	I0915 19:02:10.018293    7552 addons.go:404] enableAddons start: toEnable=map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false], additional=[]
	I0915 19:02:10.023208    7552 addons.go:65] Setting storage-provisioner=true in profile "functional-20210915185528-22848"
	I0915 19:02:10.023208    7552 addons.go:65] Setting default-storageclass=true in profile "functional-20210915185528-22848"
	I0915 19:02:10.023208    7552 addons.go:153] Setting addon storage-provisioner=true in "functional-20210915185528-22848"
	W0915 19:02:10.023208    7552 addons.go:165] addon storage-provisioner should already be in state true
	I0915 19:02:10.023208    7552 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-20210915185528-22848"
	I0915 19:02:10.023208    7552 host.go:66] Checking if "functional-20210915185528-22848" exists ...
	I0915 19:02:10.073047    7552 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
	I0915 19:02:10.076100    7552 cli_runner.go:115] Run: docker container inspect functional-20210915185528-22848 --format={{.State.Status}}
	I0915 19:02:10.076100    7552 cli_runner.go:115] Run: docker container inspect functional-20210915185528-22848 --format={{.State.Status}}
	I0915 19:02:10.349993    7552 start.go:709] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0915 19:02:10.374608    7552 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-20210915185528-22848
	I0915 19:02:10.972808    7552 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0915 19:02:10.973814    7552 addons.go:337] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 19:02:10.973814    7552 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0915 19:02:10.988802    7552 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210915185528-22848
	I0915 19:02:11.050791    7552 addons.go:153] Setting addon default-storageclass=true in "functional-20210915185528-22848"
	W0915 19:02:11.050791    7552 addons.go:165] addon default-storageclass should already be in state true
	I0915 19:02:11.050916    7552 host.go:66] Checking if "functional-20210915185528-22848" exists ...
	I0915 19:02:11.081681    7552 cli_runner.go:115] Run: docker container inspect functional-20210915185528-22848 --format={{.State.Status}}
	I0915 19:02:11.213121    7552 node_ready.go:35] waiting up to 6m0s for node "functional-20210915185528-22848" to be "Ready" ...
	I0915 19:02:11.237299    7552 node_ready.go:49] node "functional-20210915185528-22848" has status "Ready":"True"
	I0915 19:02:11.237436    7552 node_ready.go:38] duration metric: took 24.3158ms waiting for node "functional-20210915185528-22848" to be "Ready" ...
	I0915 19:02:11.237436    7552 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 19:02:11.275418    7552 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-kz448" in "kube-system" namespace to be "Ready" ...
	I0915 19:02:11.321585    7552 pod_ready.go:92] pod "coredns-78fcd69978-kz448" in "kube-system" namespace has status "Ready":"True"
	I0915 19:02:11.321585    7552 pod_ready.go:81] duration metric: took 46.1677ms waiting for pod "coredns-78fcd69978-kz448" in "kube-system" namespace to be "Ready" ...
	I0915 19:02:11.321585    7552 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-20210915185528-22848" in "kube-system" namespace to be "Ready" ...
	I0915 19:02:11.345327    7552 pod_ready.go:92] pod "etcd-functional-20210915185528-22848" in "kube-system" namespace has status "Ready":"True"
	I0915 19:02:11.345327    7552 pod_ready.go:81] duration metric: took 23.7413ms waiting for pod "etcd-functional-20210915185528-22848" in "kube-system" namespace to be "Ready" ...
	I0915 19:02:11.345327    7552 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-20210915185528-22848" in "kube-system" namespace to be "Ready" ...
	I0915 19:02:11.410641    7552 pod_ready.go:92] pod "kube-apiserver-functional-20210915185528-22848" in "kube-system" namespace has status "Ready":"True"
	I0915 19:02:11.410641    7552 pod_ready.go:81] duration metric: took 65.3152ms waiting for pod "kube-apiserver-functional-20210915185528-22848" in "kube-system" namespace to be "Ready" ...
	I0915 19:02:11.410852    7552 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-20210915185528-22848" in "kube-system" namespace to be "Ready" ...
	I0915 19:02:11.490043    7552 pod_ready.go:92] pod "kube-controller-manager-functional-20210915185528-22848" in "kube-system" namespace has status "Ready":"True"
	I0915 19:02:11.490043    7552 pod_ready.go:81] duration metric: took 79.1922ms waiting for pod "kube-controller-manager-functional-20210915185528-22848" in "kube-system" namespace to be "Ready" ...
	I0915 19:02:11.490043    7552 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-75lgx" in "kube-system" namespace to be "Ready" ...
	I0915 19:02:11.633371    7552 pod_ready.go:92] pod "kube-proxy-75lgx" in "kube-system" namespace has status "Ready":"True"
	I0915 19:02:11.633371    7552 pod_ready.go:81] duration metric: took 143.3286ms waiting for pod "kube-proxy-75lgx" in "kube-system" namespace to be "Ready" ...
	I0915 19:02:11.633504    7552 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-20210915185528-22848" in "kube-system" namespace to be "Ready" ...
	I0915 19:02:11.783774    7552 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55730 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\functional-20210915185528-22848\id_rsa Username:docker}
	I0915 19:02:11.875640    7552 addons.go:337] installing /etc/kubernetes/addons/storageclass.yaml
	I0915 19:02:11.875640    7552 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0915 19:02:11.885521    7552 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210915185528-22848
	I0915 19:02:12.029533    7552 pod_ready.go:92] pod "kube-scheduler-functional-20210915185528-22848" in "kube-system" namespace has status "Ready":"True"
	I0915 19:02:12.029533    7552 pod_ready.go:81] duration metric: took 396.031ms waiting for pod "kube-scheduler-functional-20210915185528-22848" in "kube-system" namespace to be "Ready" ...
	I0915 19:02:12.029533    7552 pod_ready.go:38] duration metric: took 792.1013ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 19:02:12.029533    7552 api_server.go:50] waiting for apiserver process to appear ...
	I0915 19:02:12.042590    7552 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 19:02:12.129789    7552 api_server.go:70] duration metric: took 2.1125938s to wait for apiserver process to appear ...
	I0915 19:02:12.129789    7552 api_server.go:86] waiting for apiserver healthz status ...
	I0915 19:02:12.129789    7552 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:55734/healthz ...
	I0915 19:02:12.135093    7552 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 19:02:12.200167    7552 api_server.go:265] https://127.0.0.1:55734/healthz returned 200:
	ok
	I0915 19:02:12.210634    7552 api_server.go:139] control plane version: v1.22.1
	I0915 19:02:12.210634    7552 api_server.go:129] duration metric: took 80.8453ms to wait for apiserver health ...
	I0915 19:02:12.210634    7552 system_pods.go:43] waiting for kube-system pods to appear ...
	I0915 19:02:12.260972    7552 system_pods.go:59] 7 kube-system pods found
	I0915 19:02:12.261067    7552 system_pods.go:61] "coredns-78fcd69978-kz448" [02c2eb54-ff80-44e8-8801-802e1dc5625f] Running
	I0915 19:02:12.261067    7552 system_pods.go:61] "etcd-functional-20210915185528-22848" [a5e1a1e8-31bc-44a8-8f69-78a87dd8bfeb] Running
	I0915 19:02:12.261067    7552 system_pods.go:61] "kube-apiserver-functional-20210915185528-22848" [b6c28e24-2401-4fe3-ab11-91ea44668cd1] Running
	I0915 19:02:12.261067    7552 system_pods.go:61] "kube-controller-manager-functional-20210915185528-22848" [344a80f2-afc1-414e-8446-e8eeff71ec5c] Running
	I0915 19:02:12.261067    7552 system_pods.go:61] "kube-proxy-75lgx" [f51bd18e-0ce7-4813-ba9e-d2eedb280750] Running
	I0915 19:02:12.261067    7552 system_pods.go:61] "kube-scheduler-functional-20210915185528-22848" [d7d0d620-55ad-4a82-994c-0f03657affe9] Running
	I0915 19:02:12.261067    7552 system_pods.go:61] "storage-provisioner" [0079c81b-04ae-439f-be17-1a1ba8697238] Running
	I0915 19:02:12.261067    7552 system_pods.go:74] duration metric: took 50.4335ms to wait for pod list to return data ...
	I0915 19:02:12.261067    7552 default_sa.go:34] waiting for default service account to be created ...
	I0915 19:02:12.440471    7552 default_sa.go:45] found service account: "default"
	I0915 19:02:12.440471    7552 default_sa.go:55] duration metric: took 179.4055ms for default service account to be created ...
	I0915 19:02:12.440471    7552 system_pods.go:116] waiting for k8s-apps to be running ...
	I0915 19:02:12.640992    7552 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55730 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\functional-20210915185528-22848\id_rsa Username:docker}
	I0915 19:02:12.915710    7552 system_pods.go:86] 7 kube-system pods found
	I0915 19:02:12.915710    7552 system_pods.go:89] "coredns-78fcd69978-kz448" [02c2eb54-ff80-44e8-8801-802e1dc5625f] Running
	I0915 19:02:12.915710    7552 system_pods.go:89] "etcd-functional-20210915185528-22848" [a5e1a1e8-31bc-44a8-8f69-78a87dd8bfeb] Running
	I0915 19:02:12.915710    7552 system_pods.go:89] "kube-apiserver-functional-20210915185528-22848" [b6c28e24-2401-4fe3-ab11-91ea44668cd1] Running
	I0915 19:02:12.915710    7552 system_pods.go:89] "kube-controller-manager-functional-20210915185528-22848" [344a80f2-afc1-414e-8446-e8eeff71ec5c] Running
	I0915 19:02:12.915710    7552 system_pods.go:89] "kube-proxy-75lgx" [f51bd18e-0ce7-4813-ba9e-d2eedb280750] Running
	I0915 19:02:12.915710    7552 system_pods.go:89] "kube-scheduler-functional-20210915185528-22848" [d7d0d620-55ad-4a82-994c-0f03657affe9] Running
	I0915 19:02:12.915710    7552 system_pods.go:89] "storage-provisioner" [0079c81b-04ae-439f-be17-1a1ba8697238] Running
	I0915 19:02:12.915710    7552 system_pods.go:126] duration metric: took 475.2417ms to wait for k8s-apps to be running ...
	I0915 19:02:12.915710    7552 system_svc.go:44] waiting for kubelet service to be running ....
	I0915 19:02:12.927708    7552 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
	I0915 19:02:12.940850    7552 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0915 19:02:13.586743    7552 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.4516597s)
	I0915 19:02:13.587152    7552 system_svc.go:56] duration metric: took 671.4467ms WaitForService to wait for kubelet.
	I0915 19:02:13.587152    7552 kubeadm.go:547] duration metric: took 3.5699665s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0915 19:02:13.587152    7552 node_conditions.go:102] verifying NodePressure condition ...
	I0915 19:02:13.600877    7552 node_conditions.go:122] node storage ephemeral capacity is 65792556Ki
	I0915 19:02:13.600877    7552 node_conditions.go:123] node cpu capacity is 4
	I0915 19:02:13.600999    7552 node_conditions.go:105] duration metric: took 13.8464ms to run NodePressure ...
	I0915 19:02:13.600999    7552 start.go:231] waiting for startup goroutines ...
	I0915 19:02:13.764631    7552 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0915 19:02:13.765075    7552 addons.go:406] enableAddons completed in 3.7478904s
	I0915 19:02:13.968022    7552 start.go:462] kubectl: 1.20.0, cluster: 1.22.1 (minor skew: 2)
	I0915 19:02:13.970948    7552 out.go:177] 
	W0915 19:02:13.970948    7552 out.go:242] ! C:\Program Files\Docker\Docker\resources\bin\kubectl.exe is version 1.20.0, which may have incompatibilities with Kubernetes 1.22.1.
	I0915 19:02:13.974274    7552 out.go:177]   - Want kubectl v1.22.1? Try 'minikube kubectl -- get pods -A'
	I0915 19:02:13.976510    7552 out.go:177] * Done! kubectl is now configured to use "functional-20210915185528-22848" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2021-09-15 18:55:46 UTC, end at Wed 2021-09-15 19:03:53 UTC. --
	Sep 15 19:00:40 functional-20210915185528-22848 dockerd[784]: time="2021-09-15T19:00:40.049864000Z" level=info msg="ignoring event" container=5834041e16abd18f408b1ab85db4e1c23e54fda10acbab2d307ac5f00f659f32 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 19:00:40 functional-20210915185528-22848 dockerd[784]: time="2021-09-15T19:00:40.062282800Z" level=info msg="ignoring event" container=d8a56d9a35cca3a0b472ffc534cac42f7c7df77382a63c9a179e91cb01594563 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 19:00:40 functional-20210915185528-22848 dockerd[784]: time="2021-09-15T19:00:40.075009400Z" level=info msg="ignoring event" container=0aa327ea6855b5b3f619f974024c63cf84a2a8ae192ac517c4ed31ec4527a7e5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 19:00:40 functional-20210915185528-22848 dockerd[784]: time="2021-09-15T19:00:40.162029100Z" level=info msg="ignoring event" container=4c237c5570c4f7191769a02bfb3ef30fd145a7fc3a7e8525125b337fbb8978e6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 19:00:40 functional-20210915185528-22848 dockerd[784]: time="2021-09-15T19:00:40.194706500Z" level=info msg="ignoring event" container=2d654b63f5b361593ef8fa5faebcfb2459725712489a515cc0eb1f4575744f7d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 19:00:40 functional-20210915185528-22848 dockerd[784]: time="2021-09-15T19:00:40.251922500Z" level=info msg="ignoring event" container=a49d3bd8af1ba1d317e3b4eb6709f2db4f6d688ca60b8fe7abc31d2a544ef508 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 19:00:40 functional-20210915185528-22848 dockerd[784]: time="2021-09-15T19:00:40.251989500Z" level=info msg="ignoring event" container=be181f506c70f38c1042fe26447d8e2a1beb6fd0283ff0e0af43a6f612b001d3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 19:00:40 functional-20210915185528-22848 dockerd[784]: time="2021-09-15T19:00:40.277685000Z" level=info msg="ignoring event" container=1efadde39b35091fd4fac9747511fc59cb2e648594d8c1220bffc056d799ffec module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 19:00:41 functional-20210915185528-22848 dockerd[784]: time="2021-09-15T19:00:41.295484600Z" level=info msg="ignoring event" container=08710a1f70a231dc86b7c004de084d7961598acc5dbcca4e2a7b54faefaabad7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 19:00:43 functional-20210915185528-22848 dockerd[784]: time="2021-09-15T19:00:43.273852500Z" level=info msg="ignoring event" container=6cb84dfa0246b3ae2d1d2faacc3171bfaf95046a870717df47201f7643e5d447 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 19:00:44 functional-20210915185528-22848 dockerd[784]: time="2021-09-15T19:00:44.484275400Z" level=info msg="ignoring event" container=6c33ffd51974d25a996b91ead83935203ab440228e760a3aa4c0aec26d6b2c2c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 19:00:46 functional-20210915185528-22848 dockerd[784]: time="2021-09-15T19:00:46.155724200Z" level=error msg="Handler for GET /v1.41/containers/5ccb2f65220cfc69b1d4149f4d9bdc333429077d5d31b3f94823b56050843cde/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
	Sep 15 19:00:46 functional-20210915185528-22848 dockerd[784]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11)
	Sep 15 19:00:46 functional-20210915185528-22848 dockerd[784]: time="2021-09-15T19:00:46.469484900Z" level=error msg="Handler for GET /v1.41/containers/017ee5ddc3ff85d70ea27ea66e42a2837eb5d0f9df15d654caf1becc14ceee98/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
	Sep 15 19:00:46 functional-20210915185528-22848 dockerd[784]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11)
	Sep 15 19:00:56 functional-20210915185528-22848 dockerd[784]: time="2021-09-15T19:00:56.672381300Z" level=info msg="ignoring event" container=5ccb2f65220cfc69b1d4149f4d9bdc333429077d5d31b3f94823b56050843cde module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 19:00:57 functional-20210915185528-22848 dockerd[784]: time="2021-09-15T19:00:57.898431700Z" level=info msg="ignoring event" container=3b151ad17508bd4b4d40972728559d1dc07630166e5bc81fd350ff346fdadf9f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 19:01:01 functional-20210915185528-22848 dockerd[784]: time="2021-09-15T19:01:01.470145500Z" level=info msg="ignoring event" container=e853dc1cb291f60a22b0fd2ca3cc37033199a48f71d9d8407ccb557b582e195d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 19:01:09 functional-20210915185528-22848 dockerd[784]: time="2021-09-15T19:01:09.864009400Z" level=info msg="ignoring event" container=bb0612c11b265cfaab181f5cb92ed5bc9b8f2ee51f237fc8f64c1b36f1bdab70 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 19:01:11 functional-20210915185528-22848 dockerd[784]: time="2021-09-15T19:01:11.584011600Z" level=info msg="ignoring event" container=e76ac171af93005fecd29e3f7e2302617292e1eba41e7a424bdcb4db5ef84c69 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 19:01:11 functional-20210915185528-22848 dockerd[784]: time="2021-09-15T19:01:11.891326400Z" level=info msg="ignoring event" container=2c36a50abcdd71f2a2295a921b432ea0f2748f475b950ceed6376569449fa6e6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 19:01:13 functional-20210915185528-22848 dockerd[784]: time="2021-09-15T19:01:13.577154400Z" level=info msg="ignoring event" container=e8b4bda89d6ccfe8f6f258c7933ae040134aec66ba18da19e85413b6df5bb305 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 19:01:32 functional-20210915185528-22848 dockerd[784]: time="2021-09-15T19:01:32.963534600Z" level=info msg="ignoring event" container=20d45c87c622b52056f8e55ae3e0ee1a54d32bdf3f2599543f29c344e910a66c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 19:03:27 functional-20210915185528-22848 dockerd[784]: time="2021-09-15T19:03:27.309791500Z" level=info msg="ignoring event" container=dc153b479256cdf3480938a57cfdddea9abd7de5e89037c6ecd6c2c6055d385f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 19:03:28 functional-20210915185528-22848 dockerd[784]: time="2021-09-15T19:03:28.309703300Z" level=info msg="Layer sha256:12611729abe769f611aa754a4734cecf22ff5cd0ffd1bb14d56dd4dbef61d809 cleaned up"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                           CREATED             STATE               NAME                      ATTEMPT             POD ID
	8e9a0cbbb3ccd       k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969   39 seconds ago      Running             echoserver                0                   0e748e18ddb7f
	d1d91286322e2       6e002eb89a881                                                                                   2 minutes ago       Running             kube-controller-manager   3                   3e6483cb8d4fb
	fadd5d0d97af5       6e38f40d628db                                                                                   2 minutes ago       Running             storage-provisioner       3                   7aa1be217e548
	805d322a477a0       f30469a2491a5                                                                                   2 minutes ago       Running             kube-apiserver            2                   a2a07058d2af8
	be25c29f97536       8d147537fb7d1                                                                                   2 minutes ago       Running             coredns                   1                   61be9a2057caf
	20d45c87c622b       6e002eb89a881                                                                                   2 minutes ago       Exited              kube-controller-manager   2                   3e6483cb8d4fb
	e8b4bda89d6cc       6e38f40d628db                                                                                   2 minutes ago       Exited              storage-provisioner       2                   7aa1be217e548
	8d529f0526c5e       36c4ebbc9d979                                                                                   2 minutes ago       Running             kube-proxy                1                   75c8ec4975ff6
	e853dc1cb291f       f30469a2491a5                                                                                   2 minutes ago       Exited              kube-apiserver            1                   a2a07058d2af8
	2d74646aaaa24       aca5ededae9c8                                                                                   2 minutes ago       Running             kube-scheduler            1                   017ee5ddc3ff8
	66f42af741b46       0048118155842                                                                                   2 minutes ago       Running             etcd                      1                   35e564600f444
	6c33ffd51974d       8d147537fb7d1                                                                                   5 minutes ago       Exited              coredns                   0                   0aa327ea6855b
	5834041e16abd       36c4ebbc9d979                                                                                   5 minutes ago       Exited              kube-proxy                0                   a49d3bd8af1ba
	6cb84dfa0246b       aca5ededae9c8                                                                                   5 minutes ago       Exited              kube-scheduler            0                   4c237c5570c4f
	2d654b63f5b36       0048118155842                                                                                   5 minutes ago       Exited              etcd                      0                   d8a56d9a35cca
	
	* 
	* ==> coredns [6c33ffd51974] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.4
	linux/amd64, go1.16.4, 053c4d5
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	[INFO] Reloading complete
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [be25c29f9753] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	CoreDNS-1.8.4
	linux/amd64, go1.16.4, 053c4d5
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	* 
	* ==> describe nodes <==
	* Name:               functional-20210915185528-22848
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-20210915185528-22848
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0d321606059ead2904f4f5ddd59a9a7026c7ee04
	                    minikube.k8s.io/name=functional-20210915185528-22848
	                    minikube.k8s.io/updated_at=2021_09_15T18_58_23_0700
	                    minikube.k8s.io/version=v1.23.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 15 Sep 2021 18:58:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-20210915185528-22848
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 15 Sep 2021 19:03:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 15 Sep 2021 19:03:29 +0000   Wed, 15 Sep 2021 18:58:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 15 Sep 2021 19:03:29 +0000   Wed, 15 Sep 2021 18:58:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 15 Sep 2021 19:03:29 +0000   Wed, 15 Sep 2021 18:58:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 15 Sep 2021 19:03:29 +0000   Wed, 15 Sep 2021 18:58:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-20210915185528-22848
	Capacity:
	  cpu:                4
	  ephemeral-storage:  65792556Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             20481980Ki
	  pods:               110
	Allocatable:
	  cpu:                4
	  ephemeral-storage:  65792556Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             20481980Ki
	  pods:               110
	System Info:
	  Machine ID:                 4b5e5cdd53d44f5ab575bb522d42acca
	  System UUID:                bfeeb012-cea8-4229-b7d2-e375dd0bea17
	  Boot ID:                    7b7b18db-3e3e-49d3-a2cb-ac38329b7bd9
	  Kernel Version:             4.19.121-linuxkit
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.8
	  Kubelet Version:            v1.22.1
	  Kube-Proxy Version:         v1.22.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-6cbfcd7cbc-6zxnx                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 coredns-78fcd69978-kz448                                   100m (2%)     0 (0%)      70Mi (0%)        170Mi (0%)     5m21s
	  kube-system                 etcd-functional-20210915185528-22848                       100m (2%)     0 (0%)      100Mi (0%)       0 (0%)         5m36s
	  kube-system                 kube-apiserver-functional-20210915185528-22848             250m (6%)     0 (0%)      0 (0%)           0 (0%)         2m45s
	  kube-system                 kube-controller-manager-functional-20210915185528-22848    200m (5%)     0 (0%)      0 (0%)           0 (0%)         5m28s
	  kube-system                 kube-proxy-75lgx                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m21s
	  kube-system                 kube-scheduler-functional-20210915185528-22848             100m (2%)     0 (0%)      0 (0%)           0 (0%)         5m36s
	  kube-system                 storage-provisioner                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (18%)  0 (0%)
	  memory             170Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From     Message
	  ----    ------                   ----                   ----     -------
	  Normal  NodeHasSufficientMemory  5m54s (x8 over 5m56s)  kubelet  Node functional-20210915185528-22848 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m54s (x8 over 5m56s)  kubelet  Node functional-20210915185528-22848 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m54s (x7 over 5m56s)  kubelet  Node functional-20210915185528-22848 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m31s                  kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m30s                  kubelet  Node functional-20210915185528-22848 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m30s                  kubelet  Node functional-20210915185528-22848 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m30s                  kubelet  Node functional-20210915185528-22848 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             5m30s                  kubelet  Node functional-20210915185528-22848 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  5m29s                  kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m20s                  kubelet  Node functional-20210915185528-22848 status is now: NodeReady
	  Normal  Starting                 3m4s                   kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  3m2s                   kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m1s (x8 over 3m3s)    kubelet  Node functional-20210915185528-22848 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m1s (x8 over 3m3s)    kubelet  Node functional-20210915185528-22848 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m1s (x7 over 3m3s)    kubelet  Node functional-20210915185528-22848 status is now: NodeHasSufficientPID
	
	* 
	* ==> dmesg <==
	* [  +0.000000]  hrtimer_interrupt+0x92/0x165
	[  +0.000000]  hv_stimer0_isr+0x20/0x2d
	[  +0.000000]  hv_stimer0_vector_handler+0x3b/0x57
	[  +0.000000]  hv_stimer0_callback_vector+0xf/0x20
	[  +0.000000]  </IRQ>
	[  +0.000000] RIP: 0010:arch_local_irq_enable+0x7/0x8
	[  +0.000000] Code: ef ff ff 0f 20 d8 0f 1f 40 00 c3 48 89 f8 0f 1f 40 00 c3 48 89 f8 0f 1f 40 00 c3 48 89 f8 0f 1f 40 00 c3 fb 66 0f 1f 44 00 00 <c3> 0f 1f 44 00 00 40 f6 c7 02 74 12 48 b8 ff 0f 00 00 00 00 f0 ff
	[  +0.000000] RSP: 0000:ffffbcaf423f7ee0 EFLAGS: 00000206 ORIG_RAX: ffffffffffffff12
	[  +0.000000] RAX: 0000000080000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000000] RDX: 000055a9735499db RSI: 0000000000000004 RDI: ffffbcaf423f7f58
	[  +0.000000] RBP: ffffbcaf423f7f58 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000000] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000004
	[  +0.000000] R13: 000055a9735499db R14: ffff97d483b18dc0 R15: ffff97d4e4dc7400
	[  +0.000000]  __do_page_fault+0x17f/0x42d
	[  +0.000000]  ? page_fault+0x8/0x30
	[  +0.000000]  page_fault+0x1e/0x30
	[  +0.000000] RIP: 0033:0x55a9730c8f03
	[  +0.000000] Code: 0f 6f d9 66 0f ef 0d ec 85 97 00 66 0f ef 15 f4 85 97 00 66 0f ef 1d fc 85 97 00 66 0f 38 dc c9 66 0f 38 dc d2 66 0f 38 dc db <f3> 0f 6f 20 f3 0f 6f 68 10 f3 0f 6f 74 08 e0 f3 0f 6f 7c 08 f0 66
	[  +0.000000] RSP: 002b:000000c00004bdc8 EFLAGS: 00010287
	[  +0.000000] RAX: 000055a9735499db RBX: 000055a9730cb860 RCX: 0000000000000022
	[  +0.000000] RDX: 000000c00004bde0 RSI: 000000c00004be48 RDI: 000000c000080868
	[  +0.000000] RBP: 000000c00004be28 R08: 000055a97353d681 R09: 0000000000000000
	[  +0.000000] R10: 0000000000000004 R11: 000000c0000807d0 R12: 000000000000001a
	[  +0.000000] R13: 0000000000000006 R14: 0000000000000008 R15: 0000000000000017
	[  +0.000000] ---[ end trace cdbbbbc925f6eff0 ]---
	
	* 
	* ==> etcd [2d654b63f5b3] <==
	* {"level":"info","ts":"2021-09-15T18:58:05.580Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-20210915185528-22848 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2021-09-15T18:58:05.581Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-09-15T18:58:05.581Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-09-15T18:58:05.603Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2021-09-15T18:58:05.603Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2021-09-15T18:58:05.604Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2021-09-15T18:58:05.604Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2021-09-15T18:58:05.654Z","caller":"membership/cluster.go:531","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2021-09-15T18:58:05.654Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2021-09-15T18:58:05.654Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2021-09-15T18:58:05.780Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"warn","ts":"2021-09-15T18:58:34.368Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"113.4976ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2021-09-15T18:58:34.369Z","caller":"traceutil/trace.go:171","msg":"trace[1410007702] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:393; }","duration":"115.4215ms","start":"2021-09-15T18:58:34.253Z","end":"2021-09-15T18:58:34.369Z","steps":["trace[1410007702] 'agreement among raft nodes before linearized reading'  (duration: 42.2033ms)","trace[1410007702] 'range keys from in-memory index tree'  (duration: 71.2858ms)"],"step_count":2}
	{"level":"warn","ts":"2021-09-15T18:58:34.369Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"174.1406ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/endpointslice-controller\" ","response":"range_response_count:1 size:269"}
	{"level":"info","ts":"2021-09-15T18:58:34.369Z","caller":"traceutil/trace.go:171","msg":"trace[1889322249] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/endpointslice-controller; range_end:; response_count:1; response_revision:393; }","duration":"174.1931ms","start":"2021-09-15T18:58:34.195Z","end":"2021-09-15T18:58:34.369Z","steps":["trace[1889322249] 'agreement among raft nodes before linearized reading'  (duration: 100.869ms)","trace[1889322249] 'range keys from in-memory index tree'  (duration: 73.1343ms)"],"step_count":2}
	{"level":"warn","ts":"2021-09-15T18:58:34.370Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"174.6669ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/bootstrap-signer\" ","response":"range_response_count:1 size:245"}
	{"level":"info","ts":"2021-09-15T18:58:34.370Z","caller":"traceutil/trace.go:171","msg":"trace[1339931177] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/bootstrap-signer; range_end:; response_count:1; response_revision:393; }","duration":"174.7143ms","start":"2021-09-15T18:58:34.195Z","end":"2021-09-15T18:58:34.370Z","steps":["trace[1339931177] 'agreement among raft nodes before linearized reading'  (duration: 100.8296ms)","trace[1339931177] 'range keys from in-memory index tree'  (duration: 73.8143ms)"],"step_count":2}
	{"level":"info","ts":"2021-09-15T19:00:38.368Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2021-09-15T19:00:38.369Z","caller":"embed/etcd.go:367","msg":"closing etcd server","name":"functional-20210915185528-22848","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	WARNING: 2021/09/15 19:00:38 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2021/09/15 19:00:38 [core] grpc: addrConn.createTransport failed to connect to {192.168.49.2:2379 192.168.49.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.49.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2021-09-15T19:00:38.556Z","caller":"etcdserver/server.go:1438","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2021-09-15T19:00:38.565Z","caller":"embed/etcd.go:562","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2021-09-15T19:00:38.567Z","caller":"embed/etcd.go:567","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2021-09-15T19:00:38.567Z","caller":"embed/etcd.go:369","msg":"closed etcd server","name":"functional-20210915185528-22848","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	* 
	* ==> etcd [66f42af741b4] <==
	* {"level":"info","ts":"2021-09-15T19:00:55.769Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2021-09-15T19:00:55.781Z","caller":"etcdserver/server.go:834","msg":"starting etcd server","local-member-id":"aec36adc501070cc","local-server-version":"3.5.0","cluster-id":"fa54960ea34d58be","cluster-version":"3.5"}
	{"level":"info","ts":"2021-09-15T19:00:55.782Z","caller":"etcdserver/server.go:728","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"aec36adc501070cc","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2021-09-15T19:00:55.783Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2021-09-15T19:00:55.784Z","caller":"membership/cluster.go:393","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2021-09-15T19:00:55.788Z","caller":"membership/cluster.go:523","msg":"updated cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","from":"3.5","to":"3.5"}
	{"level":"info","ts":"2021-09-15T19:00:55.801Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2021-09-15T19:00:55.802Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2021-09-15T19:00:55.802Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2021-09-15T19:00:55.803Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2021-09-15T19:00:55.802Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2021-09-15T19:00:56.379Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2021-09-15T19:00:56.379Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2021-09-15T19:00:56.379Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2021-09-15T19:00:56.379Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2021-09-15T19:00:56.379Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2021-09-15T19:00:56.379Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2021-09-15T19:00:56.379Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2021-09-15T19:00:56.385Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-20210915185528-22848 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2021-09-15T19:00:56.385Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-09-15T19:00:56.387Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-09-15T19:00:56.391Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2021-09-15T19:00:56.391Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2021-09-15T19:00:56.413Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2021-09-15T19:00:56.455Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  19:03:55 up 39 min,  0 users,  load average: 3.03, 3.11, 4.62
	Linux functional-20210915185528-22848 4.19.121-linuxkit #1 SMP Thu Jan 21 15:36:34 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [805d322a477a] <==
	* I0915 19:01:32.127710       1 apf_controller.go:299] Starting API Priority and Fairness config controller
	E0915 19:01:32.161487       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I0915 19:01:32.661418       1 apf_controller.go:304] Running API Priority and Fairness config worker
	I0915 19:01:32.664156       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0915 19:01:32.664242       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0915 19:01:32.673081       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0915 19:01:32.780860       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0915 19:01:32.780882       1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
	I0915 19:01:32.780894       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0915 19:01:32.781088       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0915 19:01:32.781106       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0915 19:01:32.799557       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0915 19:01:32.801276       1 cache.go:39] Caches are synced for autoregister controller
	I0915 19:01:33.153300       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0915 19:01:33.153542       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0915 19:01:33.165458       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0915 19:01:36.104410       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0915 19:01:36.177303       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0915 19:01:36.312530       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0915 19:01:36.362339       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0915 19:01:36.381453       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0915 19:01:50.819894       1 controller.go:611] quota admission added evaluator for: endpoints
	I0915 19:02:07.465764       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0915 19:02:35.008724       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0915 19:02:35.206593       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	* 
	* ==> kube-apiserver [e853dc1cb291] <==
	* I0915 19:01:01.174718       1 server.go:553] external host was not specified, using 192.168.49.2
	I0915 19:01:01.177175       1 server.go:161] Version: v1.22.1
	Error: failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use
	
	* 
	* ==> kube-controller-manager [20d45c87c622] <==
	* 	/usr/local/go/src/bytes/buffer.go:204 +0xbe
	crypto/tls.(*Conn).readFromUntil(0xc000259500, 0x5176ac0, 0xc000186b80, 0x5, 0xc000186b80, 0x400)
		/usr/local/go/src/crypto/tls/conn.go:798 +0xf3
	crypto/tls.(*Conn).readRecordOrCCS(0xc000259500, 0x0, 0x0, 0x3)
		/usr/local/go/src/crypto/tls/conn.go:605 +0x115
	crypto/tls.(*Conn).readRecord(...)
		/usr/local/go/src/crypto/tls/conn.go:573
	crypto/tls.(*Conn).Read(0xc000259500, 0xc000036000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
		/usr/local/go/src/crypto/tls/conn.go:1276 +0x165
	bufio.(*Reader).Read(0xc0000d26c0, 0xc000fe43b8, 0x9, 0x9, 0x99f88b, 0xc0008b1c78, 0x4071a5)
		/usr/local/go/src/bufio/bufio.go:227 +0x222
	io.ReadAtLeast(0x516f400, 0xc0000d26c0, 0xc000fe43b8, 0x9, 0x9, 0x9, 0xc000da6010, 0x6591048f045b00, 0xc000da6010)
		/usr/local/go/src/io/io.go:328 +0x87
	io.ReadFull(...)
		/usr/local/go/src/io/io.go:347
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.readFrameHeader(0xc000fe43b8, 0x9, 0x9, 0x516f400, 0xc0000d26c0, 0x0, 0x0, 0x0, 0x0)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:237 +0x89
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc000fe4380, 0xc000d6c1b0, 0x0, 0x0, 0x0)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:492 +0xa5
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*clientConnReadLoop).run(0xc0008b1fa8, 0x0, 0x0)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:1821 +0xd8
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).readLoop(0xc00009fc80)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:1743 +0x6f
	created by k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).newClientConn
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:695 +0x6c5
	
	* 
	* ==> kube-controller-manager [d1d91286322e] <==
	* I0915 19:02:07.388766       1 shared_informer.go:247] Caches are synced for stateful set 
	I0915 19:02:07.392196       1 shared_informer.go:247] Caches are synced for PVC protection 
	I0915 19:02:07.396084       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client 
	I0915 19:02:07.397658       1 shared_informer.go:247] Caches are synced for attach detach 
	I0915 19:02:07.404260       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0915 19:02:07.405895       1 shared_informer.go:247] Caches are synced for certificate-csrapproving 
	I0915 19:02:07.458281       1 shared_informer.go:247] Caches are synced for crt configmap 
	I0915 19:02:07.460651       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0915 19:02:07.461985       1 shared_informer.go:247] Caches are synced for bootstrap_signer 
	I0915 19:02:07.464484       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	I0915 19:02:07.473637       1 shared_informer.go:247] Caches are synced for ephemeral 
	I0915 19:02:07.474003       1 shared_informer.go:247] Caches are synced for resource quota 
	I0915 19:02:07.474155       1 shared_informer.go:247] Caches are synced for expand 
	I0915 19:02:07.475822       1 shared_informer.go:247] Caches are synced for resource quota 
	I0915 19:02:07.477352       1 shared_informer.go:247] Caches are synced for disruption 
	I0915 19:02:07.479753       1 disruption.go:371] Sending events to api server.
	I0915 19:02:07.481935       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving 
	I0915 19:02:07.495929       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown 
	I0915 19:02:07.496277       1 shared_informer.go:247] Caches are synced for deployment 
	I0915 19:02:07.497268       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client 
	I0915 19:02:07.879303       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0915 19:02:07.954938       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0915 19:02:07.954971       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0915 19:02:35.037113       1 event.go:291] "Event occurred" object="default/hello-node" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-6cbfcd7cbc to 1"
	I0915 19:02:35.138748       1 event.go:291] "Event occurred" object="default/hello-node-6cbfcd7cbc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-6cbfcd7cbc-6zxnx"
	
	* 
	* ==> kube-proxy [5834041e16ab] <==
	* I0915 18:58:41.383253       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0915 18:58:41.383395       1 server_others.go:140] Detected node IP 192.168.49.2
	W0915 18:58:41.384238       1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
	I0915 18:58:41.878350       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0915 18:58:41.878431       1 server_others.go:212] Using iptables Proxier.
	I0915 18:58:41.878882       1 server_others.go:219] creating dualStackProxier for iptables.
	W0915 18:58:41.878943       1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0915 18:58:41.887684       1 server.go:649] Version: v1.22.1
	I0915 18:58:41.902017       1 config.go:315] Starting service config controller
	I0915 18:58:41.902056       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0915 18:58:41.902096       1 config.go:224] Starting endpoint slice config controller
	I0915 18:58:41.902108       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	E0915 18:58:41.997535       1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"functional-20210915185528-22848.16a513e6c7771690", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc048ae7875bc8204, ext:1624351101, loc:(*time.Location)(0x2d81340)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-functional-20210915185528-22848", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"functional-20210915185528-22848", UID:"functional-20210915185528-22848", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "functional-20210915185528-22848.16a513e6c7771690" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
	I0915 18:58:42.003014       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0915 18:58:42.003116       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-proxy [8d529f0526c5] <==
	* E0915 19:01:13.690897       1 node.go:161] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20210915185528-22848": dial tcp 192.168.49.2:8441: connect: connection refused
	E0915 19:01:14.863688       1 node.go:161] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20210915185528-22848": dial tcp 192.168.49.2:8441: connect: connection refused
	E0915 19:01:17.181305       1 node.go:161] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20210915185528-22848": dial tcp 192.168.49.2:8441: connect: connection refused
	E0915 19:01:21.531488       1 node.go:161] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20210915185528-22848": dial tcp 192.168.49.2:8441: connect: connection refused
	I0915 19:01:33.199178       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0915 19:01:33.199262       1 server_others.go:140] Detected node IP 192.168.49.2
	W0915 19:01:33.256597       1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
	I0915 19:01:33.995613       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0915 19:01:33.996242       1 server_others.go:212] Using iptables Proxier.
	I0915 19:01:33.996778       1 server_others.go:219] creating dualStackProxier for iptables.
	W0915 19:01:33.997251       1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0915 19:01:33.999227       1 server.go:649] Version: v1.22.1
	I0915 19:01:34.092829       1 config.go:315] Starting service config controller
	I0915 19:01:34.092862       1 shared_informer.go:240] Waiting for caches to sync for service config
	E0915 19:01:34.102089       1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"functional-20210915185528-22848.16a5140ede1ef91c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc048aea384c9b860, ext:21100838601, loc:(*time.Location)(0x2d81340)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-functional-20210915185528-22848", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"functional-20210915185528-22848", UID:"functional-20210915185528-22848", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "functional-20210915185528-22848.16a5140ede1ef91c" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
	I0915 19:01:34.103489       1 config.go:224] Starting endpoint slice config controller
	I0915 19:01:34.104349       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0915 19:01:34.104861       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0915 19:01:34.203877       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [2d74646aaaa2] <==
	* I0915 19:00:59.560769       1 serving.go:347] Generated self-signed cert in-memory
	W0915 19:01:08.968839       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0915 19:01:08.969062       1 authentication.go:345] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0915 19:01:08.969079       1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0915 19:01:08.969090       1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0915 19:01:09.200800       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0915 19:01:09.201036       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0915 19:01:09.201040       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0915 19:01:09.200210       1 secure_serving.go:200] Serving securely on 127.0.0.1:10259
	I0915 19:01:09.303722       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0915 19:01:32.489360       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)
	E0915 19:01:32.489479       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: unknown (get nodes)
	E0915 19:01:32.489551       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)
	E0915 19:01:32.489828       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: unknown (get namespaces)
	E0915 19:01:32.489871       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)
	E0915 19:01:32.489918       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)
	E0915 19:01:32.490169       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: unknown (get pods)
	E0915 19:01:32.493289       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)
	E0915 19:01:32.493360       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)
	E0915 19:01:32.493418       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)
	E0915 19:01:32.493509       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: unknown (get services)
	E0915 19:01:32.493569       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)
	E0915 19:01:32.494127       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)
	
	* 
	* ==> kube-scheduler [6cb84dfa0246] <==
	* E0915 18:58:16.702606       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0915 18:58:16.753258       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0915 18:58:16.753347       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0915 18:58:17.592458       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0915 18:58:17.653589       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0915 18:58:17.657854       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0915 18:58:17.700181       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0915 18:58:17.752938       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0915 18:58:17.757490       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0915 18:58:17.768191       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0915 18:58:17.807374       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0915 18:58:17.853899       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0915 18:58:17.855934       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0915 18:58:17.987959       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0915 18:58:18.003480       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0915 18:58:18.270353       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0915 18:58:18.320249       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0915 18:58:18.371434       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0915 18:58:20.085494       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0915 18:58:20.085598       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0915 18:58:20.237097       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	I0915 18:58:20.457296       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	I0915 19:00:39.054781       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0915 19:00:39.054953       1 secure_serving.go:311] Stopped listening on 127.0.0.1:10259
	I0915 19:00:39.055007       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2021-09-15 18:55:46 UTC, end at Wed 2021-09-15 19:03:58 UTC. --
	Sep 15 19:01:23 functional-20210915185528-22848 kubelet[5947]: I0915 19:01:23.394915    5947 scope.go:110] "RemoveContainer" containerID="e853dc1cb291f60a22b0fd2ca3cc37033199a48f71d9d8407ccb557b582e195d"
	Sep 15 19:01:26 functional-20210915185528-22848 kubelet[5947]: I0915 19:01:26.398457    5947 scope.go:110] "RemoveContainer" containerID="e8b4bda89d6ccfe8f6f258c7933ae040134aec66ba18da19e85413b6df5bb305"
	Sep 15 19:01:32 functional-20210915185528-22848 kubelet[5947]: E0915 19:01:32.372839    5947 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Sep 15 19:01:32 functional-20210915185528-22848 kubelet[5947]: E0915 19:01:32.373804    5947 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Sep 15 19:01:32 functional-20210915185528-22848 kubelet[5947]: E0915 19:01:32.376535    5947 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Sep 15 19:01:34 functional-20210915185528-22848 kubelet[5947]: I0915 19:01:34.384531    5947 scope.go:110] "RemoveContainer" containerID="bb0612c11b265cfaab181f5cb92ed5bc9b8f2ee51f237fc8f64c1b36f1bdab70"
	Sep 15 19:01:34 functional-20210915185528-22848 kubelet[5947]: I0915 19:01:34.385242    5947 scope.go:110] "RemoveContainer" containerID="20d45c87c622b52056f8e55ae3e0ee1a54d32bdf3f2599543f29c344e910a66c"
	Sep 15 19:01:34 functional-20210915185528-22848 kubelet[5947]: E0915 19:01:34.386204    5947 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-20210915185528-22848_kube-system(38ad05134b9074b92e81105a83b60d33)\"" pod="kube-system/kube-controller-manager-functional-20210915185528-22848" podUID=38ad05134b9074b92e81105a83b60d33
	Sep 15 19:01:39 functional-20210915185528-22848 kubelet[5947]: I0915 19:01:39.260833    5947 scope.go:110] "RemoveContainer" containerID="20d45c87c622b52056f8e55ae3e0ee1a54d32bdf3f2599543f29c344e910a66c"
	Sep 15 19:01:39 functional-20210915185528-22848 kubelet[5947]: E0915 19:01:39.263564    5947 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-20210915185528-22848_kube-system(38ad05134b9074b92e81105a83b60d33)\"" pod="kube-system/kube-controller-manager-functional-20210915185528-22848" podUID=38ad05134b9074b92e81105a83b60d33
	Sep 15 19:01:40 functional-20210915185528-22848 kubelet[5947]: I0915 19:01:40.578660    5947 scope.go:110] "RemoveContainer" containerID="20d45c87c622b52056f8e55ae3e0ee1a54d32bdf3f2599543f29c344e910a66c"
	Sep 15 19:01:40 functional-20210915185528-22848 kubelet[5947]: E0915 19:01:40.580903    5947 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-20210915185528-22848_kube-system(38ad05134b9074b92e81105a83b60d33)\"" pod="kube-system/kube-controller-manager-functional-20210915185528-22848" podUID=38ad05134b9074b92e81105a83b60d33
	Sep 15 19:01:52 functional-20210915185528-22848 kubelet[5947]: E0915 19:01:52.458719    5947 fsHandler.go:114] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/docker/overlay2/e5714774789c86fd1bc99a5360bb140d97991147233748e0fe32067a9d75a2a4/diff" to get inode usage: stat /var/lib/docker/overlay2/e5714774789c86fd1bc99a5360bb140d97991147233748e0fe32067a9d75a2a4/diff: no such file or directory, extraDiskErr: could not stat "/var/lib/docker/containers/e76ac171af93005fecd29e3f7e2302617292e1eba41e7a424bdcb4db5ef84c69" to get inode usage: stat /var/lib/docker/containers/e76ac171af93005fecd29e3f7e2302617292e1eba41e7a424bdcb4db5ef84c69: no such file or directory
	Sep 15 19:01:52 functional-20210915185528-22848 kubelet[5947]: E0915 19:01:52.508695    5947 fsHandler.go:114] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/docker/overlay2/9494f7ee0cd76da476eb91a91e6c3afb07ef14c1a2ba6d7fd4a4513a88d7b7c7/diff" to get inode usage: stat /var/lib/docker/overlay2/9494f7ee0cd76da476eb91a91e6c3afb07ef14c1a2ba6d7fd4a4513a88d7b7c7/diff: no such file or directory, extraDiskErr: could not stat "/var/lib/docker/containers/bb0612c11b265cfaab181f5cb92ed5bc9b8f2ee51f237fc8f64c1b36f1bdab70" to get inode usage: stat /var/lib/docker/containers/bb0612c11b265cfaab181f5cb92ed5bc9b8f2ee51f237fc8f64c1b36f1bdab70: no such file or directory
	Sep 15 19:01:52 functional-20210915185528-22848 kubelet[5947]: E0915 19:01:52.679108    5947 fsHandler.go:114] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/docker/overlay2/6ef2bb616291753a4a08544efdf73c7d0e6409f734d52a2c8342540516880fac/diff" to get inode usage: stat /var/lib/docker/overlay2/6ef2bb616291753a4a08544efdf73c7d0e6409f734d52a2c8342540516880fac/diff: no such file or directory, extraDiskErr: could not stat "/var/lib/docker/containers/2c36a50abcdd71f2a2295a921b432ea0f2748f475b950ceed6376569449fa6e6" to get inode usage: stat /var/lib/docker/containers/2c36a50abcdd71f2a2295a921b432ea0f2748f475b950ceed6376569449fa6e6: no such file or directory
	Sep 15 19:01:52 functional-20210915185528-22848 kubelet[5947]: E0915 19:01:52.682770    5947 fsHandler.go:114] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/docker/overlay2/64d9684555b7151aa1230886a330403a1719214b8e7de95bf163ab366792ee0a/diff" to get inode usage: stat /var/lib/docker/overlay2/64d9684555b7151aa1230886a330403a1719214b8e7de95bf163ab366792ee0a/diff: no such file or directory, extraDiskErr: could not stat "/var/lib/docker/containers/5ccb2f65220cfc69b1d4149f4d9bdc333429077d5d31b3f94823b56050843cde" to get inode usage: stat /var/lib/docker/containers/5ccb2f65220cfc69b1d4149f4d9bdc333429077d5d31b3f94823b56050843cde: no such file or directory
	Sep 15 19:01:52 functional-20210915185528-22848 kubelet[5947]: I0915 19:01:52.888701    5947 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-78fcd69978-kz448 through plugin: invalid network status for"
	Sep 15 19:01:53 functional-20210915185528-22848 kubelet[5947]: I0915 19:01:53.396219    5947 scope.go:110] "RemoveContainer" containerID="20d45c87c622b52056f8e55ae3e0ee1a54d32bdf3f2599543f29c344e910a66c"
	Sep 15 19:02:35 functional-20210915185528-22848 kubelet[5947]: I0915 19:02:35.267866    5947 topology_manager.go:200] "Topology Admit Handler"
	Sep 15 19:02:35 functional-20210915185528-22848 kubelet[5947]: I0915 19:02:35.517667    5947 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdvqt\" (UniqueName: \"kubernetes.io/projected/5db4631a-6341-4ed5-b14f-a19ecbbf28a8-kube-api-access-mdvqt\") pod \"hello-node-6cbfcd7cbc-6zxnx\" (UID: \"5db4631a-6341-4ed5-b14f-a19ecbbf28a8\") "
	Sep 15 19:02:38 functional-20210915185528-22848 kubelet[5947]: I0915 19:02:38.489966    5947 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/hello-node-6cbfcd7cbc-6zxnx through plugin: invalid network status for"
	Sep 15 19:02:38 functional-20210915185528-22848 kubelet[5947]: I0915 19:02:38.497385    5947 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="0e748e18ddb7fd23242d988eddb7ffcc92b7e25185da03c987647ee05e83a159"
	Sep 15 19:02:39 functional-20210915185528-22848 kubelet[5947]: I0915 19:02:39.537819    5947 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/hello-node-6cbfcd7cbc-6zxnx through plugin: invalid network status for"
	Sep 15 19:03:15 functional-20210915185528-22848 kubelet[5947]: I0915 19:03:15.497389    5947 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/hello-node-6cbfcd7cbc-6zxnx through plugin: invalid network status for"
	Sep 15 19:03:17 functional-20210915185528-22848 kubelet[5947]: I0915 19:03:17.673707    5947 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/hello-node-6cbfcd7cbc-6zxnx through plugin: invalid network status for"
	
	* 
	* ==> storage-provisioner [e8b4bda89d6c] <==
	* I0915 19:01:13.158761       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0915 19:01:13.168482       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> storage-provisioner [fadd5d0d97af] <==
	* I0915 19:01:27.299909       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0915 19:01:33.258101       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0915 19:01:33.258235       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0915 19:01:50.826826       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0915 19:01:50.827584       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0ad39bb8-bb45-4cbe-ab94-bc169d169a59", APIVersion:"v1", ResourceVersion:"674", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-20210915185528-22848_4cc74288-3ad3-4030-8969-9352d5ace344 became leader
	I0915 19:01:50.828838       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-20210915185528-22848_4cc74288-3ad3-4030-8969-9352d5ace344!
	I0915 19:01:50.929360       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-20210915185528-22848_4cc74288-3ad3-4030-8969-9352d5ace344!
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Executing "docker container inspect functional-20210915185528-22848 --format={{.State.Status}}" took an unusually long time: 2.157805s
	* Restarting the docker service may improve performance.

                                                
                                                
** /stderr **
helpers_test.go:255: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-20210915185528-22848 -n functional-20210915185528-22848

                                                
                                                
=== CONT  TestFunctional/parallel/LoadImageFromFile
helpers_test.go:255: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-20210915185528-22848 -n functional-20210915185528-22848: (5.9584474s)
helpers_test.go:262: (dbg) Run:  kubectl --context functional-20210915185528-22848 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: 
helpers_test.go:273: ======> post-mortem[TestFunctional/parallel/LoadImageFromFile]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context functional-20210915185528-22848 describe pod 

                                                
                                                
=== CONT  TestFunctional/parallel/LoadImageFromFile
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context functional-20210915185528-22848 describe pod : exit status 1 (303.1866ms)

                                                
                                                
** stderr ** 
	error: resource name may not be empty

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context functional-20210915185528-22848 describe pod : exit status 1
--- FAIL: TestFunctional/parallel/LoadImageFromFile (41.35s)

                                                
                                    
TestScheduledStopWindows (250.39s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:129: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-20210915195659-22848 --memory=2048 --driver=docker
E0915 19:57:35.677017   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915185528-22848\client.crt: The system cannot find the path specified.
E0915 19:58:32.265355   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915183056-22848\client.crt: The system cannot find the path specified.
E0915 19:59:55.360675   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915183056-22848\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:129: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-20210915195659-22848 --memory=2048 --driver=docker: (3m11.9398822s)
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-20210915195659-22848 --schedule 5m
scheduled_stop_test.go:138: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-20210915195659-22848 --schedule 5m: (6.6006179s)
scheduled_stop_test.go:192: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-20210915195659-22848 -n scheduled-stop-20210915195659-22848
scheduled_stop_test.go:192: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-20210915195659-22848 -n scheduled-stop-20210915195659-22848: (5.3198738s)
scheduled_stop_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-20210915195659-22848 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:56: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-20210915195659-22848 -- sudo systemctl show minikube-scheduled-stop --no-page: (4.5800142s)
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-20210915195659-22848 --schedule 5s
scheduled_stop_test.go:138: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-20210915195659-22848 --schedule 5s: (4.408135s)
scheduled_stop_test.go:206: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-20210915195659-22848
scheduled_stop_test.go:206: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-20210915195659-22848: exit status 3 (5.4063263s)

                                                
                                                
-- stdout --
	scheduled-stop-20210915195659-22848
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Executing "docker container inspect scheduled-stop-20210915195659-22848 --format={{.State.Status}}" took an unusually long time: 2.1166773s
	* Restarting the docker service may improve performance.
	E0915 20:00:53.183685   83032 status.go:374] failed to get storage capacity of /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	E0915 20:00:53.183685   83032 status.go:258] status error: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port

                                                
                                                
** /stderr **
scheduled_stop_test.go:210: minikube status: exit status 3

                                                
                                                
-- stdout --
	scheduled-stop-20210915195659-22848
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Executing "docker container inspect scheduled-stop-20210915195659-22848 --format={{.State.Status}}" took an unusually long time: 2.1166773s
	* Restarting the docker service may improve performance.
	E0915 20:00:53.183685   83032 status.go:374] failed to get storage capacity of /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	E0915 20:00:53.183685   83032 status.go:258] status error: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port

                                                
                                                
** /stderr **
panic.go:642: *** TestScheduledStopWindows FAILED at 2021-09-15 20:00:53.2271962 +0000 GMT m=+5501.593082801
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestScheduledStopWindows]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect scheduled-stop-20210915195659-22848
helpers_test.go:236: (dbg) docker inspect scheduled-stop-20210915195659-22848:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4e9e2cf00b47a71e7b7370f5cd9071bd75ddbabb9779dd95b88a6dc2612ecb3d",
	        "Created": "2021-09-15T19:57:14.0999198Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 130,
	            "Error": "",
	            "StartedAt": "2021-09-15T19:57:15.8900707Z",
	            "FinishedAt": "2021-09-15T20:00:50.6537898Z"
	        },
	        "Image": "sha256:83b5a81388468b1ffcd3874b4f24c1406c63c33ac07797cc8bed6ad0207d36a8",
	        "ResolvConfPath": "/var/lib/docker/containers/4e9e2cf00b47a71e7b7370f5cd9071bd75ddbabb9779dd95b88a6dc2612ecb3d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4e9e2cf00b47a71e7b7370f5cd9071bd75ddbabb9779dd95b88a6dc2612ecb3d/hostname",
	        "HostsPath": "/var/lib/docker/containers/4e9e2cf00b47a71e7b7370f5cd9071bd75ddbabb9779dd95b88a6dc2612ecb3d/hosts",
	        "LogPath": "/var/lib/docker/containers/4e9e2cf00b47a71e7b7370f5cd9071bd75ddbabb9779dd95b88a6dc2612ecb3d/4e9e2cf00b47a71e7b7370f5cd9071bd75ddbabb9779dd95b88a6dc2612ecb3d-json.log",
	        "Name": "/scheduled-stop-20210915195659-22848",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "scheduled-stop-20210915195659-22848:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "scheduled-stop-20210915195659-22848",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2147483648,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/5af7e608f3e7ce63891da2570f6c83bf87e7fac78d49005b5b6a2b2315ec4e1b-init/diff:/var/lib/docker/overlay2/a259804ff45c264548e9459111f8eb7e789339b3253b50b62afde896e9e19e34/diff:/var/lib/docker/overlay2/61882a81480713e64bf02bef67583a0609b2be0589d08187547a88789584af86/diff:/var/lib/docker/overlay2/a41d1f5e24156c1d438fe25c567f3c3492d15cb77b1bf5545be9086be845138a/diff:/var/lib/docker/overlay2/86e30e10438032d0a02b54850ad0316347488f3d5b831234af1e91f943269850/diff:/var/lib/docker/overlay2/f6962936c0c1b0636454847e8e963a472786602e15a00d5e020827c2372acfce/diff:/var/lib/docker/overlay2/5eee83c6029359aefecbba85cc6d456e3a5a97c3ef6e9f4850e8a53c62b30ef5/diff:/var/lib/docker/overlay2/fdaa4e134ab960962e0a388adaa3a6aa59dd139cc016dfd4cdf4565bc80e8469/diff:/var/lib/docker/overlay2/9e1b9be7e17136fa81b0a224e2fab9704d3234ca119d87c14f9a676bbdb023f5/diff:/var/lib/docker/overlay2/ffe06185e93cb7ae8d48d84ea9be8817f2ae3d2aae85114ce41477579e23debd/diff:/var/lib/docker/overlay2/22171320a621ffe79c2acb0c13308b1b0cd3bc94a4083992e7b8589b820c625c/diff:/var/lib/docker/overlay2/eb2fb3ccafd6cb1c26a9642601357b3e0563e9e9361a5ab359bf1af592a0d709/diff:/var/lib/docker/overlay2/6081368e802a14f6f6a7424eb7af3f5f29f85bf59ed0a0709ce25b53738095cb/diff:/var/lib/docker/overlay2/fd7176e5912a824a0543fa3ab5170921538a287401ff8a451c90e1ef0fd8adea/diff:/var/lib/docker/overlay2/eec5078968f5e7332ff82191a780be0efef38aef75ea7cd67723ab3d2760c281/diff:/var/lib/docker/overlay2/d18d41a44c04cb695c4b69ac0db0d5807cee4ca8a5a695629f97e2d8d9cf9461/diff:/var/lib/docker/overlay2/b125406c01cea6a83fa5515a19bb6822d1194fcd47eeb1ed541b9304804a54be/diff:/var/lib/docker/overlay2/b49ae7a2c3101c5b094f611e08fb7b68d8688cb3c333066f697aafc1dc7c2c7e/diff:/var/lib/docker/overlay2/ce599106d279966257baab0cc43ed0366d690702b449073e812a47ae6698dedf/diff:/var/lib/docker/overlay2/5f005c2e8ab4cd52b59f5118e6f5e352dd834afde547ba1ee7b71141319e3547/diff:/var/lib/docker/overlay2/2b1f9abca5d32e21fe1da66b2604d858599b74fc9359bd55e050cebccaba5c7d/diff:/var/lib/docker/overlay2/a5f956d0de2a0313dfbaefb921518d8a75267b71a9e7c68207a81682db5394b5/diff:/var/lib/docker/overlay2/e0050af32b9eb0f12404cf384139cd48050d4a969d090faaa07b9f42fe954627/diff:/var/lib/docker/overlay2/f18c15fd90b361f7a13265b5426d985a47e261abde790665028916551b5218f3/diff:/var/lib/docker/overlay2/0f266ad6b65c857206fd10e121b74564370ca213f5706493619b6a590c496660/diff:/var/lib/docker/overlay2/fc044060d3681022984120753b0c02afc05afbb256dbdfc9f7f5e966e1d98820/diff:/var/lib/docker/overlay2/91df5011d1388013be2af7bb3097195366fd38d1f46d472e630aab583779f7c0/diff:/var/lib/docker/overlay2/f810a7fbc880b9ff7c367b14e34088e851fa045d860ce4bf4c49999fcf814a6e/diff:/var/lib/docker/overlay2/318584cae4acc059b81627e00ae703167673c73d234d6e64e894fc3500750f90/diff:/var/lib/docker/overlay2/a2e1d86ffb5aec517fe891619294d506621a002f4c53e8d3103d5d4ce777ebaf/diff:/var/lib/docker/overlay2/12fd1d215a6881aa03a06f2b8a5415b483530db121b120b66940e1e5cd2e1b96/diff:/var/lib/docker/overlay2/28bbbfc0404aecb7d7d79b4c2bfec07cd44260c922a982af523bda70bbd7be20/diff:/var/lib/docker/overlay2/4dc0077174d58a8904abddfc67a48e6dd082a1eebc72518af19da37b4eff7b2c/diff:/var/lib/docker/overlay2/4d39db844b44258dbb67b16662175b453df7bfd43274abbf1968486539955750/diff:/var/lib/docker/overlay2/ca34d73c6c31358a3eb714a014a5961863e05dee505a1cfca2c8829380ce362b/diff:/var/lib/docker/overlay2/0c0595112799a0b3604c58158946fb3d0657c4198a6a72e12fbe29a74174d3ea/diff:/var/lib/docker/overlay2/5fc43276da56e90293816918613014e7cec7bedc292a062d39d034c95d56351d/diff:/var/lib/docker/overlay2/71a282cb60752128ee370ced1695c67c421341d364956818e5852fd6714a0e64/diff:/var/lib/docker/overlay2/07723c7054e35caae4987fa66d3d1fd44de0d2875612274dde2bf04e8349b0a0/diff:/var/lib/docker/overlay2/0433db88749fb49b0f02cc65b7113c97134270991a8a82bbe7ff4432aae7e502/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5af7e608f3e7ce63891da2570f6c83bf87e7fac78d49005b5b6a2b2315ec4e1b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5af7e608f3e7ce63891da2570f6c83bf87e7fac78d49005b5b6a2b2315ec4e1b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5af7e608f3e7ce63891da2570f6c83bf87e7fac78d49005b5b6a2b2315ec4e1b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "scheduled-stop-20210915195659-22848",
	                "Source": "/var/lib/docker/volumes/scheduled-stop-20210915195659-22848/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "scheduled-stop-20210915195659-22848",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "scheduled-stop-20210915195659-22848",
	                "name.minikube.sigs.k8s.io": "scheduled-stop-20210915195659-22848",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7d04c07ea642c0fac2cab2b8538418a6098b4e911a70dcd44e63fda8b2b613a8",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {},
	            "SandboxKey": "/var/run/docker/netns/7d04c07ea642",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "scheduled-stop-20210915195659-22848": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "4e9e2cf00b47",
	                        "scheduled-stop-20210915195659-22848"
	                    ],
	                    "NetworkID": "00de2ade45cdd90edfa8e020539ed698f1b495aa53d77dad1ada49caadcb697c",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-20210915195659-22848 -n scheduled-stop-20210915195659-22848
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-20210915195659-22848 -n scheduled-stop-20210915195659-22848: exit status 7 (2.4754905s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "scheduled-stop-20210915195659-22848" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:176: Cleaning up "scheduled-stop-20210915195659-22848" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-20210915195659-22848
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-20210915195659-22848: (14.0247464s)
--- FAIL: TestScheduledStopWindows (250.39s)

                                                
                                    
TestPause/serial/VerifyStatus (37.35s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:77: (dbg) Run:  out/minikube-windows-amd64.exe status -p pause-20210915200708-22848 --output=json --layout=cluster
status_test.go:77: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p pause-20210915200708-22848 --output=json --layout=cluster: exit status 2 (5.5402718s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b369045c-c4e8-4e9d-8345-f2e82b8cf9b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"Executing \"docker container inspect pause-20210915200708-22848 --format={{.State.Status}}\" took an unusually long time: 2.0039232s"}}
	{"specversion":"1.0","id":"4fee7bda-7a59-4cfd-a7df-daa5881f8cca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Restarting the docker service may improve performance."}}
	{"Name":"pause-20210915200708-22848","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 13 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.23.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20210915200708-22848","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
pause_test.go:187: unmarshalling: invalid character '{' after top-level value
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestPause/serial/VerifyStatus]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect pause-20210915200708-22848
helpers_test.go:236: (dbg) docker inspect pause-20210915200708-22848:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6554305ea083021e47ef95d9404a882456ffba7198c927c4a672a0b6a8ca0f91",
	        "Created": "2021-09-15T20:07:32.4011154Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 124014,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-09-15T20:07:36.7750208Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:83b5a81388468b1ffcd3874b4f24c1406c63c33ac07797cc8bed6ad0207d36a8",
	        "ResolvConfPath": "/var/lib/docker/containers/6554305ea083021e47ef95d9404a882456ffba7198c927c4a672a0b6a8ca0f91/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6554305ea083021e47ef95d9404a882456ffba7198c927c4a672a0b6a8ca0f91/hostname",
	        "HostsPath": "/var/lib/docker/containers/6554305ea083021e47ef95d9404a882456ffba7198c927c4a672a0b6a8ca0f91/hosts",
	        "LogPath": "/var/lib/docker/containers/6554305ea083021e47ef95d9404a882456ffba7198c927c4a672a0b6a8ca0f91/6554305ea083021e47ef95d9404a882456ffba7198c927c4a672a0b6a8ca0f91-json.log",
	        "Name": "/pause-20210915200708-22848",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "pause-20210915200708-22848:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-20210915200708-22848",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2147483648,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/8203f02270185999fc0731de996a5e2c9ac6a6c41edc6f940053bc9ec9f67a68-init/diff:/var/lib/docker/overlay2/a259804ff45c264548e9459111f8eb7e789339b3253b50b62afde896e9e19e34/diff:/var/lib/docker/overlay2/61882a81480713e64bf02bef67583a0609b2be0589d08187547a88789584af86/diff:/var/lib/docker/overlay2/a41d1f5e24156c1d438fe25c567f3c3492d15cb77b1bf5545be9086be845138a/diff:/var/lib/docker/overlay2/86e30e10438032d0a02b54850ad0316347488f3d5b831234af1e91f943269850/diff:/var/lib/docker/overlay2/f6962936c0c1b0636454847e8e963a472786602e15a00d5e020827c2372acfce/diff:/var/lib/docker/overlay2/5eee83c6029359aefecbba85cc6d456e3a5a97c3ef6e9f4850e8a53c62b30ef5/diff:/var/lib/docker/overlay2/fdaa4e134ab960962e0a388adaa3a6aa59dd139cc016dfd4cdf4565bc80e8469/diff:/var/lib/docker/overlay2/9e1b9be7e17136fa81b0a224e2fab9704d3234ca119d87c14f9a676bbdb023f5/diff:/var/lib/docker/overlay2/ffe06185e93cb7ae8d48d84ea9be8817f2ae3d2aae85114ce41477579e23debd/diff:/var/lib/docker/overlay2/22171320a621ffe79c2acb0c13308b1b0cd3bc94a4083992e7b8589b820c625c/diff:/var/lib/docker/overlay2/eb2fb3ccafd6cb1c26a9642601357b3e0563e9e9361a5ab359bf1af592a0d709/diff:/var/lib/docker/overlay2/6081368e802a14f6f6a7424eb7af3f5f29f85bf59ed0a0709ce25b53738095cb/diff:/var/lib/docker/overlay2/fd7176e5912a824a0543fa3ab5170921538a287401ff8a451c90e1ef0fd8adea/diff:/var/lib/docker/overlay2/eec5078968f5e7332ff82191a780be0efef38aef75ea7cd67723ab3d2760c281/diff:/var/lib/docker/overlay2/d18d41a44c04cb695c4b69ac0db0d5807cee4ca8a5a695629f97e2d8d9cf9461/diff:/var/lib/docker/overlay2/b125406c01cea6a83fa5515a19bb6822d1194fcd47eeb1ed541b9304804a54be/diff:/var/lib/docker/overlay2/b49ae7a2c3101c5b094f611e08fb7b68d8688cb3c333066f697aafc1dc7c2c7e/diff:/var/lib/docker/overlay2/ce599106d279966257baab0cc43ed0366d690702b449073e812a47ae6698dedf/diff:/var/lib/docker/overlay2/5f005c2e8ab4cd52b59f5118e6f5e352dd834afde547ba1ee7b71141319e3547/diff:/var/lib/docker/overlay2/2b1f9abca5d32e21fe1da66b2604d858599b74fc9359bd55e050cebccaba5c7d/diff:/var/lib/docker/overlay2/a5f956d0de2a0313dfbaefb921518d8a75267b71a9e7c68207a81682db5394b5/diff:/var/lib/docker/overlay2/e0050af32b9eb0f12404cf384139cd48050d4a969d090faaa07b9f42fe954627/diff:/var/lib/docker/overlay2/f18c15fd90b361f7a13265b5426d985a47e261abde790665028916551b5218f3/diff:/var/lib/docker/overlay2/0f266ad6b65c857206fd10e121b74564370ca213f5706493619b6a590c496660/diff:/var/lib/docker/overlay2/fc044060d3681022984120753b0c02afc05afbb256dbdfc9f7f5e966e1d98820/diff:/var/lib/docker/overlay2/91df5011d1388013be2af7bb3097195366fd38d1f46d472e630aab583779f7c0/diff:/var/lib/docker/overlay2/f810a7fbc880b9ff7c367b14e34088e851fa045d860ce4bf4c49999fcf814a6e/diff:/var/lib/docker/overlay2/318584cae4acc059b81627e00ae703167673c73d234d6e64e894fc3500750f90/diff:/var/lib/docker/overlay2/a2e1d86ffb5aec517fe891619294d506621a002f4c53e8d3103d5d4ce777ebaf/diff:/var/lib/docker/overlay2/12fd1d215a6881aa03a06f2b8a5415b483530db121b120b66940e1e5cd2e1b96/diff:/var/lib/docker/overlay2/28bbbfc0404aecb7d7d79b4c2bfec07cd44260c922a982af523bda70bbd7be20/diff:/var/lib/docker/overlay2/4dc0077174d58a8904abddfc67a48e6dd082a1eebc72518af19da37b4eff7b2c/diff:/var/lib/docker/overlay2/4d39db844b44258dbb67b16662175b453df7bfd43274abbf1968486539955750/diff:/var/lib/docker/overlay2/ca34d73c6c31358a3eb714a014a5961863e05dee505a1cfca2c8829380ce362b/diff:/var/lib/docker/overlay2/0c0595112799a0b3604c58158946fb3d0657c4198a6a72e12fbe29a74174d3ea/diff:/var/lib/docker/overlay2/5fc43276da56e90293816918613014e7cec7bedc292a062d39d034c95d56351d/diff:/var/lib/docker/overlay2/71a282cb60752128ee370ced1695c67c421341d364956818e5852fd6714a0e64/diff:/var/lib/docker/overlay2/07723c7054e35caae4987fa66d3d1fd44de0d2875612274dde2bf04e8349b0a0/diff:/var/lib/docker/overlay2/0433db88749fb49b0f02cc65b7113c97134270991a8a82bbe7ff4432aae7e502/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8203f02270185999fc0731de996a5e2c9ac6a6c41edc6f940053bc9ec9f67a68/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8203f02270185999fc0731de996a5e2c9ac6a6c41edc6f940053bc9ec9f67a68/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8203f02270185999fc0731de996a5e2c9ac6a6c41edc6f940053bc9ec9f67a68/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-20210915200708-22848",
	                "Source": "/var/lib/docker/volumes/pause-20210915200708-22848/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-20210915200708-22848",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-20210915200708-22848",
	                "name.minikube.sigs.k8s.io": "pause-20210915200708-22848",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5b303abbfc950a029f267f11d8fae0399114d5910193701a97b665a4d89b4f95",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57016"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57017"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57018"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57019"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57020"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/5b303abbfc95",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-20210915200708-22848": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6554305ea083",
	                        "pause-20210915200708-22848"
	                    ],
	                    "NetworkID": "2dbb42bd7b08376522eb0245187f89c840890e2bd5477636f0e988415ead885b",
	                    "EndpointID": "0624522ede3ba62b8b47fa2fdff97a9a6b6137753f98b2fb64c8b3edf410be73",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-20210915200708-22848 -n pause-20210915200708-22848
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-20210915200708-22848 -n pause-20210915200708-22848: exit status 2 (5.7306857s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	! Executing "docker container inspect pause-20210915200708-22848 --format={{.State.Status}}" took an unusually long time: 2.1242384s
	* Restarting the docker service may improve performance.

                                                
                                                
** /stderr **
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestPause/serial/VerifyStatus FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestPause/serial/VerifyStatus]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p pause-20210915200708-22848 logs -n 25

                                                
                                                
=== CONT  TestPause/serial/VerifyStatus
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p pause-20210915200708-22848 logs -n 25: exit status 110 (24.6119111s)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |------------|-------------------------------------------|-------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	|  Command   |                   Args                    |                  Profile                  |          User           | Version |          Start Time           |           End Time            |
	|------------|-------------------------------------------|-------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	| stop       | -p                                        | multinode-20210915192401-22848            | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 19:36:48 GMT | Wed, 15 Sep 2021 19:37:25 GMT |
	|            | multinode-20210915192401-22848            |                                           |                         |         |                               |                               |
	| start      | -p                                        | multinode-20210915192401-22848            | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 19:37:26 GMT | Wed, 15 Sep 2021 19:41:17 GMT |
	|            | multinode-20210915192401-22848            |                                           |                         |         |                               |                               |
	|            | --wait=true -v=8                          |                                           |                         |         |                               |                               |
	|            | --alsologtostderr                         |                                           |                         |         |                               |                               |
	| -p         | multinode-20210915192401-22848            | multinode-20210915192401-22848            | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 19:41:18 GMT | Wed, 15 Sep 2021 19:41:42 GMT |
	|            | node delete m03                           |                                           |                         |         |                               |                               |
	| -p         | multinode-20210915192401-22848            | multinode-20210915192401-22848            | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 19:41:52 GMT | Wed, 15 Sep 2021 19:42:24 GMT |
	|            | stop                                      |                                           |                         |         |                               |                               |
	| start      | -p                                        | multinode-20210915192401-22848            | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 19:42:32 GMT | Wed, 15 Sep 2021 19:45:05 GMT |
	|            | multinode-20210915192401-22848            |                                           |                         |         |                               |                               |
	|            | --wait=true -v=8                          |                                           |                         |         |                               |                               |
	|            | --alsologtostderr                         |                                           |                         |         |                               |                               |
	|            | --driver=docker                           |                                           |                         |         |                               |                               |
	| start      | -p                                        | multinode-20210915192401-22848-m03        | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 19:45:15 GMT | Wed, 15 Sep 2021 19:49:02 GMT |
	|            | multinode-20210915192401-22848-m03        |                                           |                         |         |                               |                               |
	|            | --driver=docker                           |                                           |                         |         |                               |                               |
	| delete     | -p                                        | multinode-20210915192401-22848-m03        | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 19:49:09 GMT | Wed, 15 Sep 2021 19:49:29 GMT |
	|            | multinode-20210915192401-22848-m03        |                                           |                         |         |                               |                               |
	| delete     | -p                                        | multinode-20210915192401-22848            | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 19:49:29 GMT | Wed, 15 Sep 2021 19:49:58 GMT |
	|            | multinode-20210915192401-22848            |                                           |                         |         |                               |                               |
	| start      | -p                                        | test-preload-20210915194958-22848         | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 19:49:59 GMT | Wed, 15 Sep 2021 19:53:51 GMT |
	|            | test-preload-20210915194958-22848         |                                           |                         |         |                               |                               |
	|            | --memory=2200 --alsologtostderr           |                                           |                         |         |                               |                               |
	|            | --wait=true --preload=false               |                                           |                         |         |                               |                               |
	|            | --driver=docker                           |                                           |                         |         |                               |                               |
	|            | --kubernetes-version=v1.17.0              |                                           |                         |         |                               |                               |
	| ssh        | -p                                        | test-preload-20210915194958-22848         | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 19:53:51 GMT | Wed, 15 Sep 2021 19:53:58 GMT |
	|            | test-preload-20210915194958-22848         |                                           |                         |         |                               |                               |
	|            | -- docker pull busybox                    |                                           |                         |         |                               |                               |
	| start      | -p                                        | test-preload-20210915194958-22848         | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 19:53:58 GMT | Wed, 15 Sep 2021 19:56:37 GMT |
	|            | test-preload-20210915194958-22848         |                                           |                         |         |                               |                               |
	|            | --memory=2200 --alsologtostderr           |                                           |                         |         |                               |                               |
	|            | -v=1 --wait=true --driver=docker          |                                           |                         |         |                               |                               |
	|            | --kubernetes-version=v1.17.3              |                                           |                         |         |                               |                               |
	| ssh        | -p                                        | test-preload-20210915194958-22848         | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 19:56:38 GMT | Wed, 15 Sep 2021 19:56:42 GMT |
	|            | test-preload-20210915194958-22848         |                                           |                         |         |                               |                               |
	|            | -- docker images                          |                                           |                         |         |                               |                               |
	| delete     | -p                                        | test-preload-20210915194958-22848         | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 19:56:43 GMT | Wed, 15 Sep 2021 19:56:59 GMT |
	|            | test-preload-20210915194958-22848         |                                           |                         |         |                               |                               |
	| start      | -p                                        | scheduled-stop-20210915195659-22848       | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 19:57:00 GMT | Wed, 15 Sep 2021 20:00:11 GMT |
	|            | scheduled-stop-20210915195659-22848       |                                           |                         |         |                               |                               |
	|            | --memory=2048 --driver=docker             |                                           |                         |         |                               |                               |
	| stop       | -p                                        | scheduled-stop-20210915195659-22848       | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 20:00:12 GMT | Wed, 15 Sep 2021 20:00:18 GMT |
	|            | scheduled-stop-20210915195659-22848       |                                           |                         |         |                               |                               |
	|            | --schedule 5m                             |                                           |                         |         |                               |                               |
	| ssh        | -p                                        | scheduled-stop-20210915195659-22848       | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 20:00:24 GMT | Wed, 15 Sep 2021 20:00:28 GMT |
	|            | scheduled-stop-20210915195659-22848       |                                           |                         |         |                               |                               |
	|            | -- sudo systemctl show                    |                                           |                         |         |                               |                               |
	|            | minikube-scheduled-stop --no-page         |                                           |                         |         |                               |                               |
	| stop       | -p                                        | scheduled-stop-20210915195659-22848       | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 20:00:28 GMT | Wed, 15 Sep 2021 20:00:32 GMT |
	|            | scheduled-stop-20210915195659-22848       |                                           |                         |         |                               |                               |
	|            | --schedule 5s                             |                                           |                         |         |                               |                               |
	| delete     | -p                                        | scheduled-stop-20210915195659-22848       | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 20:00:56 GMT | Wed, 15 Sep 2021 20:01:10 GMT |
	|            | scheduled-stop-20210915195659-22848       |                                           |                         |         |                               |                               |
	| start      | -p                                        | skaffold-20210915200110-22848             | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 20:01:12 GMT | Wed, 15 Sep 2021 20:04:21 GMT |
	|            | skaffold-20210915200110-22848             |                                           |                         |         |                               |                               |
	|            | --memory=2600 --driver=docker             |                                           |                         |         |                               |                               |
	| docker-env | --shell none -p                           | skaffold-20210915200110-22848             | skaffold                | v1.23.0 | Wed, 15 Sep 2021 20:04:24 GMT | Wed, 15 Sep 2021 20:04:30 GMT |
	|            | skaffold-20210915200110-22848             |                                           |                         |         |                               |                               |
	|            | --user=skaffold                           |                                           |                         |         |                               |                               |
	| delete     | -p                                        | skaffold-20210915200110-22848             | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 20:05:57 GMT | Wed, 15 Sep 2021 20:06:14 GMT |
	|            | skaffold-20210915200110-22848             |                                           |                         |         |                               |                               |
	| delete     | -p                                        | insufficient-storage-20210915200614-22848 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 20:06:56 GMT | Wed, 15 Sep 2021 20:07:07 GMT |
	|            | insufficient-storage-20210915200614-22848 |                                           |                         |         |                               |                               |
	| start      | -p pause-20210915200708-22848             | pause-20210915200708-22848                | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 20:07:08 GMT | Wed, 15 Sep 2021 20:15:59 GMT |
	|            | --memory=2048                             |                                           |                         |         |                               |                               |
	|            | --install-addons=false                    |                                           |                         |         |                               |                               |
	|            | --wait=all --driver=docker                |                                           |                         |         |                               |                               |
	| start      | -p pause-20210915200708-22848             | pause-20210915200708-22848                | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 20:16:00 GMT | Wed, 15 Sep 2021 20:17:33 GMT |
	|            | --alsologtostderr -v=1                    |                                           |                         |         |                               |                               |
	|            | --driver=docker                           |                                           |                         |         |                               |                               |
	| pause      | -p pause-20210915200708-22848             | pause-20210915200708-22848                | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 20:17:33 GMT | Wed, 15 Sep 2021 20:17:43 GMT |
	|            | --alsologtostderr -v=5                    |                                           |                         |         |                               |                               |
	|------------|-------------------------------------------|-------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/09/15 20:16:00
	Running on machine: windows-server-1
	Binary: Built with gc go1.17 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0915 20:16:00.273240   20224 out.go:298] Setting OutFile to fd 2684 ...
	I0915 20:16:00.275787   20224 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0915 20:16:00.275787   20224 out.go:311] Setting ErrFile to fd 2476...
	I0915 20:16:00.275787   20224 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0915 20:16:00.306750   20224 out.go:305] Setting JSON to false
	I0915 20:16:00.315834   20224 start.go:111] hostinfo: {"hostname":"windows-server-1","uptime":9156433,"bootTime":1622580527,"procs":159,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"9879231f-6171-435d-bab4-5b366cc6391b"}
	W0915 20:16:00.316139   20224 start.go:119] gopshost.Virtualization returned error: not implemented yet
	I0915 20:16:00.318633   20224 out.go:177] * [pause-20210915200708-22848] minikube v1.23.0 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	I0915 20:16:00.318633   20224 notify.go:169] Checking for updates...
	I0915 20:16:00.318633   20224 out.go:177]   - KUBECONFIG=C:\Users\jenkins\minikube-integration\kubeconfig
	I0915 20:16:00.326015   20224 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins\minikube-integration\.minikube
	I0915 20:15:59.919766   24676 pod_ready.go:102] pod "coredns-78fcd69978-r7sp5" in "kube-system" namespace has status "Ready":"False"
	I0915 20:16:00.329362   20224 out.go:177]   - MINIKUBE_LOCATION=12425
	I0915 20:16:00.331563   20224 config.go:177] Loaded profile config "pause-20210915200708-22848": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.1
	I0915 20:16:00.347344   20224 driver.go:343] Setting default libvirt URI to qemu:///system
	I0915 20:16:02.850161   20224 docker.go:132] docker version: linux-20.10.5
	I0915 20:16:02.861476   20224 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0915 20:16:04.136856   20224 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.2750517s)
	I0915 20:16:04.137629   20224 info.go:263] docker info: {ID:AZM6:4F7P:D7J3:PGKE:EIYN:3OQU:SEA3:BB2T:P6VC:GKKH:UKSA:R2VX Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:80 OomKillDisable:true NGoroutines:78 SystemTime:2021-09-15 20:16:03.550151 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0915 20:16:04.147170   20224 out.go:177] * Using the docker driver based on existing profile
	I0915 20:16:04.147605   20224 start.go:278] selected driver: docker
	I0915 20:16:04.147605   20224 start.go:751] validating driver "docker" against &{Name:pause-20210915200708-22848 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.1 ClusterName:pause-20210915200708-22848 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.22.1 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0915 20:16:04.147970   20224 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0915 20:16:04.185123   20224 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0915 20:16:05.480887   20224 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.2957722s)
	I0915 20:16:05.480887   20224 info.go:263] docker info: {ID:AZM6:4F7P:D7J3:PGKE:EIYN:3OQU:SEA3:BB2T:P6VC:GKKH:UKSA:R2VX Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:80 OomKillDisable:true NGoroutines:78 SystemTime:2021-09-15 20:16:04.818653 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0915 20:16:05.702998   20224 cni.go:93] Creating CNI manager for ""
	I0915 20:16:05.703138   20224 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0915 20:16:05.703138   20224 start_flags.go:278] config:
	{Name:pause-20210915200708-22848 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.1 ClusterName:pause-20210915200708-22848 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.22.1 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0915 20:16:05.711586   20224 out.go:177] * Starting control plane node pause-20210915200708-22848 in cluster pause-20210915200708-22848
	I0915 20:16:05.711586   20224 cache.go:118] Beginning downloading kic base image for docker with docker
	I0915 20:16:05.714547   20224 out.go:177] * Pulling base image ...
	I0915 20:16:05.715320   20224 preload.go:131] Checking if preload exists for k8s version v1.22.1 and runtime docker
	I0915 20:16:05.715320   20224 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 in local docker daemon
	I0915 20:16:05.715836   20224 preload.go:147] Found local preload: C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v12-v1.22.1-docker-overlay2-amd64.tar.lz4
	I0915 20:16:05.715987   20224 cache.go:57] Caching tarball of preloaded images
	I0915 20:16:05.716869   20224 preload.go:173] Found C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v12-v1.22.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0915 20:16:05.717568   20224 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.1 on docker
	I0915 20:16:05.718011   20224 profile.go:148] Saving config to C:\Users\jenkins\minikube-integration\.minikube\profiles\pause-20210915200708-22848\config.json ...
	I0915 20:16:06.691702   20224 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 in local docker daemon, skipping pull
	I0915 20:16:06.691702   20224 cache.go:140] gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 exists in daemon, skipping load
	I0915 20:16:06.692507   20224 cache.go:206] Successfully downloaded all kic artifacts
	I0915 20:16:06.692819   20224 start.go:313] acquiring machines lock for pause-20210915200708-22848: {Name:mke1f2789b9871627909df491d3322807f163c1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 20:16:06.693305   20224 start.go:317] acquired machines lock for "pause-20210915200708-22848" in 486.7µs
	I0915 20:16:06.693673   20224 start.go:93] Skipping create...Using existing machine configuration
	I0915 20:16:06.693849   20224 fix.go:55] fixHost starting: 
	I0915 20:16:06.721359   20224 cli_runner.go:115] Run: docker container inspect pause-20210915200708-22848 --format={{.State.Status}}
	I0915 20:16:07.575574   20224 fix.go:108] recreateIfNeeded on pause-20210915200708-22848: state=Running err=<nil>
	W0915 20:16:07.575691   20224 fix.go:134] unexpected machine state, will restart: <nil>
	I0915 20:16:03.287496   24676 pod_ready.go:102] pod "coredns-78fcd69978-r7sp5" in "kube-system" namespace has status "Ready":"False"
	I0915 20:16:05.373908   24676 pod_ready.go:102] pod "coredns-78fcd69978-r7sp5" in "kube-system" namespace has status "Ready":"False"
	I0915 20:16:07.959082   24676 pod_ready.go:102] pod "coredns-78fcd69978-r7sp5" in "kube-system" namespace has status "Ready":"False"
	I0915 20:16:07.581293   20224 out.go:177] * Updating the running docker "pause-20210915200708-22848" container ...
	I0915 20:16:07.581565   20224 machine.go:88] provisioning docker machine ...
	I0915 20:16:07.581903   20224 ubuntu.go:169] provisioning hostname "pause-20210915200708-22848"
	I0915 20:16:07.604552   20224 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210915200708-22848
	I0915 20:16:08.448581   20224 main.go:130] libmachine: Using SSH client type: native
	I0915 20:16:08.449297   20224 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x11facc0] 0x11fdb80 <nil>  [] 0s} 127.0.0.1 57016 <nil> <nil>}
	I0915 20:16:08.449297   20224 main.go:130] libmachine: About to run SSH command:
	sudo hostname pause-20210915200708-22848 && echo "pause-20210915200708-22848" | sudo tee /etc/hostname
	I0915 20:16:10.921003   24676 pod_ready.go:102] pod "coredns-78fcd69978-r7sp5" in "kube-system" namespace has status "Ready":"False"
	I0915 20:16:10.031414   20224 main.go:130] libmachine: SSH cmd err, output: <nil>: pause-20210915200708-22848
	
	I0915 20:16:10.050247   20224 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210915200708-22848
	I0915 20:16:10.824328   20224 main.go:130] libmachine: Using SSH client type: native
	I0915 20:16:10.824793   20224 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x11facc0] 0x11fdb80 <nil>  [] 0s} 127.0.0.1 57016 <nil> <nil>}
	I0915 20:16:10.825027   20224 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-20210915200708-22848' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-20210915200708-22848/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-20210915200708-22848' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0915 20:16:12.823373   20224 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0915 20:16:12.823373   20224 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins\minikube-integration\.minikube CaCertPath:C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins\minikube-integration\.minikube}
	I0915 20:16:12.823373   20224 ubuntu.go:177] setting up certificates
	I0915 20:16:12.823373   20224 provision.go:83] configureAuth start
	I0915 20:16:12.840063   20224 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-20210915200708-22848
	I0915 20:16:13.731270   20224 provision.go:138] copyHostCerts
	I0915 20:16:13.731814   20224 exec_runner.go:145] found C:\Users\jenkins\minikube-integration\.minikube/ca.pem, removing ...
	I0915 20:16:13.732147   20224 exec_runner.go:208] rm: C:\Users\jenkins\minikube-integration\.minikube\ca.pem
	I0915 20:16:13.732147   20224 exec_runner.go:152] cp: C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0915 20:16:13.732147   20224 exec_runner.go:145] found C:\Users\jenkins\minikube-integration\.minikube/cert.pem, removing ...
	I0915 20:16:13.732147   20224 exec_runner.go:208] rm: C:\Users\jenkins\minikube-integration\.minikube\cert.pem
	I0915 20:16:13.732147   20224 exec_runner.go:152] cp: C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0915 20:16:13.744967   20224 exec_runner.go:145] found C:\Users\jenkins\minikube-integration\.minikube/key.pem, removing ...
	I0915 20:16:13.744967   20224 exec_runner.go:208] rm: C:\Users\jenkins\minikube-integration\.minikube\key.pem
	I0915 20:16:13.746027   20224 exec_runner.go:152] cp: C:\Users\jenkins\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins\minikube-integration\.minikube/key.pem (1675 bytes)
	I0915 20:16:13.748135   20224 provision.go:112] generating server cert: C:\Users\jenkins\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.pause-20210915200708-22848 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube pause-20210915200708-22848]
	I0915 20:16:13.938507   20224 provision.go:172] copyRemoteCerts
	I0915 20:16:13.951334   20224 ssh_runner.go:152] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0915 20:16:13.968289   20224 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210915200708-22848
	I0915 20:16:13.336111   24676 pod_ready.go:102] pod "coredns-78fcd69978-r7sp5" in "kube-system" namespace has status "Ready":"False"
	I0915 20:16:15.800286   24676 pod_ready.go:102] pod "coredns-78fcd69978-r7sp5" in "kube-system" namespace has status "Ready":"False"
	I0915 20:16:17.575626   24676 pod_ready.go:92] pod "coredns-78fcd69978-r7sp5" in "kube-system" namespace has status "Ready":"True"
	I0915 20:16:17.575626   24676 pod_ready.go:81] duration metric: took 41.6896849s waiting for pod "coredns-78fcd69978-r7sp5" in "kube-system" namespace to be "Ready" ...
	I0915 20:16:17.575626   24676 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-tj7pt" in "kube-system" namespace to be "Ready" ...
	I0915 20:16:14.889627   20224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57016 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\pause-20210915200708-22848\id_rsa Username:docker}
	I0915 20:16:15.956497   20224 ssh_runner.go:192] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (2.0051761s)
	I0915 20:16:15.957209   20224 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0915 20:16:16.789117   20224 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1253 bytes)
	I0915 20:16:17.318938   20224 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0915 20:16:17.855429   20224 provision.go:86] duration metric: configureAuth took 5.0320884s
	I0915 20:16:17.855429   20224 ubuntu.go:193] setting minikube options for container-runtime
	I0915 20:16:17.856107   20224 config.go:177] Loaded profile config "pause-20210915200708-22848": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.1
	I0915 20:16:17.877439   20224 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210915200708-22848
	I0915 20:16:18.811042   20224 main.go:130] libmachine: Using SSH client type: native
	I0915 20:16:18.811860   20224 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x11facc0] 0x11fdb80 <nil>  [] 0s} 127.0.0.1 57016 <nil> <nil>}
	I0915 20:16:18.811860   20224 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0915 20:16:20.075063   24676 pod_ready.go:102] pod "coredns-78fcd69978-tj7pt" in "kube-system" namespace has status "Ready":"False"
	I0915 20:16:22.305216   24676 pod_ready.go:102] pod "coredns-78fcd69978-tj7pt" in "kube-system" namespace has status "Ready":"False"
	I0915 20:16:20.231781   20224 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0915 20:16:20.231961   20224 ubuntu.go:71] root file system type: overlay
	I0915 20:16:20.232564   20224 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0915 20:16:20.259724   20224 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210915200708-22848
	I0915 20:16:21.167059   20224 main.go:130] libmachine: Using SSH client type: native
	I0915 20:16:21.167637   20224 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x11facc0] 0x11fdb80 <nil>  [] 0s} 127.0.0.1 57016 <nil> <nil>}
	I0915 20:16:21.168079   20224 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0915 20:16:22.264723   20224 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0915 20:16:22.280386   20224 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210915200708-22848
	I0915 20:16:23.163753   20224 main.go:130] libmachine: Using SSH client type: native
	I0915 20:16:23.164485   20224 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x11facc0] 0x11fdb80 <nil>  [] 0s} 127.0.0.1 57016 <nil> <nil>}
	I0915 20:16:23.164736   20224 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0915 20:16:24.355576   24676 pod_ready.go:102] pod "coredns-78fcd69978-tj7pt" in "kube-system" namespace has status "Ready":"False"
	I0915 20:16:26.761638   24676 pod_ready.go:102] pod "coredns-78fcd69978-tj7pt" in "kube-system" namespace has status "Ready":"False"
	I0915 20:16:25.010473   20224 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0915 20:16:25.010650   20224 machine.go:91] provisioned docker machine in 17.4290607s
	I0915 20:16:25.010650   20224 start.go:267] post-start starting for "pause-20210915200708-22848" (driver="docker")
	I0915 20:16:25.010650   20224 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0915 20:16:25.026950   20224 ssh_runner.go:152] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0915 20:16:25.046773   20224 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210915200708-22848
	I0915 20:16:25.844440   20224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57016 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\pause-20210915200708-22848\id_rsa Username:docker}
	I0915 20:16:26.394780   20224 ssh_runner.go:192] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.3675465s)
	I0915 20:16:26.422542   20224 ssh_runner.go:152] Run: cat /etc/os-release
	I0915 20:16:26.536887   20224 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0915 20:16:26.536887   20224 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0915 20:16:26.537239   20224 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0915 20:16:26.537239   20224 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0915 20:16:26.537361   20224 filesync.go:126] Scanning C:\Users\jenkins\minikube-integration\.minikube\addons for local assets ...
	I0915 20:16:26.537458   20224 filesync.go:126] Scanning C:\Users\jenkins\minikube-integration\.minikube\files for local assets ...
	I0915 20:16:26.539080   20224 filesync.go:149] local asset: C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\228482.pem -> 228482.pem in /etc/ssl/certs
	I0915 20:16:26.561597   20224 ssh_runner.go:152] Run: sudo mkdir -p /etc/ssl/certs
	I0915 20:16:26.891006   20224 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\228482.pem --> /etc/ssl/certs/228482.pem (1708 bytes)
	I0915 20:16:27.937115   20224 start.go:270] post-start completed in 2.926484s
	I0915 20:16:27.958658   20224 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 20:16:27.967324   20224 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210915200708-22848
	I0915 20:16:28.841967   20224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57016 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\pause-20210915200708-22848\id_rsa Username:docker}
	I0915 20:16:29.610443   20224 ssh_runner.go:192] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.6517954s)
	I0915 20:16:29.610443   20224 fix.go:57] fixHost completed within 22.9167408s
	I0915 20:16:29.610443   20224 start.go:80] releasing machines lock for "pause-20210915200708-22848", held for 22.9172843s
	I0915 20:16:29.623796   20224 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-20210915200708-22848
	I0915 20:16:28.811969   24676 pod_ready.go:102] pod "coredns-78fcd69978-tj7pt" in "kube-system" namespace has status "Ready":"False"
	I0915 20:16:31.668587   24676 pod_ready.go:102] pod "coredns-78fcd69978-tj7pt" in "kube-system" namespace has status "Ready":"False"
	I0915 20:16:30.552398   20224 ssh_runner.go:152] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0915 20:16:30.570670   20224 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210915200708-22848
	I0915 20:16:30.573660   20224 ssh_runner.go:152] Run: systemctl --version
	I0915 20:16:30.583681   20224 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210915200708-22848
	I0915 20:16:31.580688   20224 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210915200708-22848: (1.0100246s)
	I0915 20:16:31.580688   20224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57016 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\pause-20210915200708-22848\id_rsa Username:docker}
	I0915 20:16:31.629911   20224 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210915200708-22848: (1.0452453s)
	I0915 20:16:31.630290   20224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57016 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\pause-20210915200708-22848\id_rsa Username:docker}
	I0915 20:16:33.100261   20224 ssh_runner.go:192] Completed: systemctl --version: (2.5266172s)
	I0915 20:16:33.121226   20224 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service containerd
	I0915 20:16:33.850789   20224 ssh_runner.go:192] Completed: curl -sS -m 2 https://k8s.gcr.io/: (3.2984124s)
	I0915 20:16:33.871910   20224 ssh_runner.go:152] Run: sudo systemctl cat docker.service
	I0915 20:16:34.356163   20224 cruntime.go:255] skipping containerd shutdown because we are bound to it
	I0915 20:16:34.374933   20224 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service crio
	I0915 20:16:34.520293   20224 ssh_runner.go:152] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0915 20:16:33.886479   24676 pod_ready.go:102] pod "coredns-78fcd69978-tj7pt" in "kube-system" namespace has status "Ready":"False"
	I0915 20:16:35.914539   24676 pod_ready.go:102] pod "coredns-78fcd69978-tj7pt" in "kube-system" namespace has status "Ready":"False"
	I0915 20:16:34.892822   20224 ssh_runner.go:152] Run: sudo systemctl unmask docker.service
	I0915 20:16:36.978838   20224 ssh_runner.go:192] Completed: sudo systemctl unmask docker.service: (2.0860295s)
	I0915 20:16:37.010092   20224 ssh_runner.go:152] Run: sudo systemctl enable docker.socket
	I0915 20:16:38.411145   24676 pod_ready.go:102] pod "coredns-78fcd69978-tj7pt" in "kube-system" namespace has status "Ready":"False"
	I0915 20:16:40.419810   24676 pod_ready.go:102] pod "coredns-78fcd69978-tj7pt" in "kube-system" namespace has status "Ready":"False"
	I0915 20:16:40.115437   20224 ssh_runner.go:192] Completed: sudo systemctl enable docker.socket: (3.1053648s)
	I0915 20:16:40.132363   20224 ssh_runner.go:152] Run: sudo systemctl cat docker.service
	I0915 20:16:40.369202   20224 ssh_runner.go:152] Run: sudo systemctl daemon-reload
	I0915 20:16:43.104852   20224 ssh_runner.go:192] Completed: sudo systemctl daemon-reload: (2.735668s)
	I0915 20:16:43.125969   20224 ssh_runner.go:152] Run: sudo systemctl start docker
	I0915 20:16:43.946922   20224 ssh_runner.go:152] Run: docker version --format {{.Server.Version}}
	I0915 20:16:45.661032   20224 ssh_runner.go:192] Completed: docker version --format {{.Server.Version}}: (1.7141204s)
	I0915 20:16:45.684054   20224 ssh_runner.go:152] Run: docker version --format {{.Server.Version}}
	I0915 20:16:47.131173   20224 ssh_runner.go:192] Completed: docker version --format {{.Server.Version}}: (1.4471287s)
	I0915 20:16:43.522925   24676 pod_ready.go:102] pod "coredns-78fcd69978-tj7pt" in "kube-system" namespace has status "Ready":"False"
	I0915 20:16:45.956650   24676 pod_ready.go:92] pod "coredns-78fcd69978-tj7pt" in "kube-system" namespace has status "Ready":"True"
	I0915 20:16:45.956973   24676 pod_ready.go:81] duration metric: took 28.3812065s waiting for pod "coredns-78fcd69978-tj7pt" in "kube-system" namespace to be "Ready" ...
	I0915 20:16:45.956973   24676 pod_ready.go:78] waiting up to 6m0s for pod "etcd-offline-docker-20210915200708-22848" in "kube-system" namespace to be "Ready" ...
	I0915 20:16:46.088541   24676 pod_ready.go:92] pod "etcd-offline-docker-20210915200708-22848" in "kube-system" namespace has status "Ready":"True"
	I0915 20:16:46.088718   24676 pod_ready.go:81] duration metric: took 131.6115ms waiting for pod "etcd-offline-docker-20210915200708-22848" in "kube-system" namespace to be "Ready" ...
	I0915 20:16:46.088718   24676 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-offline-docker-20210915200708-22848" in "kube-system" namespace to be "Ready" ...
	I0915 20:16:46.162374   24676 pod_ready.go:92] pod "kube-apiserver-offline-docker-20210915200708-22848" in "kube-system" namespace has status "Ready":"True"
	I0915 20:16:46.162374   24676 pod_ready.go:81] duration metric: took 73.6569ms waiting for pod "kube-apiserver-offline-docker-20210915200708-22848" in "kube-system" namespace to be "Ready" ...
	I0915 20:16:46.162374   24676 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-offline-docker-20210915200708-22848" in "kube-system" namespace to be "Ready" ...
	I0915 20:16:46.313699   24676 pod_ready.go:92] pod "kube-controller-manager-offline-docker-20210915200708-22848" in "kube-system" namespace has status "Ready":"True"
	I0915 20:16:46.313699   24676 pod_ready.go:81] duration metric: took 151.3259ms waiting for pod "kube-controller-manager-offline-docker-20210915200708-22848" in "kube-system" namespace to be "Ready" ...
	I0915 20:16:46.313699   24676 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fr95k" in "kube-system" namespace to be "Ready" ...
	I0915 20:16:46.472449   24676 pod_ready.go:92] pod "kube-proxy-fr95k" in "kube-system" namespace has status "Ready":"True"
	I0915 20:16:46.472580   24676 pod_ready.go:81] duration metric: took 158.8821ms waiting for pod "kube-proxy-fr95k" in "kube-system" namespace to be "Ready" ...
	I0915 20:16:46.472580   24676 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-offline-docker-20210915200708-22848" in "kube-system" namespace to be "Ready" ...
	I0915 20:16:46.553560   24676 pod_ready.go:92] pod "kube-scheduler-offline-docker-20210915200708-22848" in "kube-system" namespace has status "Ready":"True"
	I0915 20:16:46.553695   24676 pod_ready.go:81] duration metric: took 81.115ms waiting for pod "kube-scheduler-offline-docker-20210915200708-22848" in "kube-system" namespace to be "Ready" ...
	I0915 20:16:46.553803   24676 pod_ready.go:38] duration metric: took 1m11.0420317s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 20:16:46.553803   24676 api_server.go:50] waiting for apiserver process to appear ...
	I0915 20:16:46.577121   24676 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 20:16:47.527461   24676 logs.go:270] 1 containers: [cade9f9b4edd]
	I0915 20:16:47.544665   24676 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 20:16:47.135088   20224 out.go:204] * Preparing Kubernetes v1.22.1 on Docker 20.10.8 ...
	I0915 20:16:47.146100   20224 cli_runner.go:115] Run: docker exec -t pause-20210915200708-22848 dig +short host.docker.internal
	I0915 20:16:49.059977   20224 cli_runner.go:168] Completed: docker exec -t pause-20210915200708-22848 dig +short host.docker.internal: (1.9138897s)
	I0915 20:16:49.059977   20224 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0915 20:16:49.074928   20224 ssh_runner.go:152] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0915 20:16:49.529621   20224 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-20210915200708-22848
	I0915 20:16:48.395929   24676 logs.go:270] 1 containers: [f66ec49c3f94]
	I0915 20:16:48.415299   24676 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 20:16:50.640819   24676 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: (2.2255343s)
	I0915 20:16:50.641037   24676 logs.go:270] 1 containers: [5cbf0d0f1d37]
	I0915 20:16:50.659962   24676 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 20:16:50.368260   20224 preload.go:131] Checking if preload exists for k8s version v1.22.1 and runtime docker
	I0915 20:16:50.380247   20224 ssh_runner.go:152] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0915 20:16:52.374035   20224 ssh_runner.go:192] Completed: docker images --format {{.Repository}}:{{.Tag}}: (1.9938014s)
	I0915 20:16:52.374171   20224 docker.go:558] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.22.1
	k8s.gcr.io/kube-scheduler:v1.22.1
	k8s.gcr.io/kube-proxy:v1.22.1
	k8s.gcr.io/kube-controller-manager:v1.22.1
	k8s.gcr.io/etcd:3.5.0-0
	k8s.gcr.io/coredns/coredns:v1.8.4
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.5
	kubernetesui/dashboard:v2.1.0
	kubernetesui/metrics-scraper:v1.0.4
	
	-- /stdout --
	I0915 20:16:52.374171   20224 docker.go:489] Images already preloaded, skipping extraction
	I0915 20:16:52.390286   20224 ssh_runner.go:152] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0915 20:16:53.964793   20224 ssh_runner.go:192] Completed: docker images --format {{.Repository}}:{{.Tag}}: (1.5745173s)
	I0915 20:16:53.965002   20224 docker.go:558] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.22.1
	k8s.gcr.io/kube-proxy:v1.22.1
	k8s.gcr.io/kube-scheduler:v1.22.1
	k8s.gcr.io/kube-controller-manager:v1.22.1
	k8s.gcr.io/etcd:3.5.0-0
	k8s.gcr.io/coredns/coredns:v1.8.4
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.5
	kubernetesui/dashboard:v2.1.0
	kubernetesui/metrics-scraper:v1.0.4
	
	-- /stdout --
	I0915 20:16:53.965246   20224 cache_images.go:78] Images are preloaded, skipping loading
	I0915 20:16:53.987701   20224 ssh_runner.go:152] Run: docker info --format {{.CgroupDriver}}
	I0915 20:16:53.298088   24676 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: (2.6375407s)
	I0915 20:16:53.298088   24676 logs.go:270] 1 containers: [1d5441743b86]
	I0915 20:16:53.317575   24676 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 20:16:55.151271   24676 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: (1.8337085s)
	I0915 20:16:55.151271   24676 logs.go:270] 1 containers: [b530d1d9d2dc]
	I0915 20:16:55.180532   24676 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0915 20:16:56.884252   24676 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}: (1.703731s)
	I0915 20:16:56.884252   24676 logs.go:270] 0 containers: []
	W0915 20:16:56.884537   24676 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0915 20:16:56.904254   24676 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 20:16:58.735725   20224 ssh_runner.go:192] Completed: docker info --format {{.CgroupDriver}}: (4.7478924s)
	I0915 20:16:58.735898   20224 cni.go:93] Creating CNI manager for ""
	I0915 20:16:58.736279   20224 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0915 20:16:58.736279   20224 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0915 20:16:58.736647   20224 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.22.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-20210915200708-22848 NodeName:pause-20210915200708-22848 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0915 20:16:58.737351   20224 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "pause-20210915200708-22848"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.22.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0915 20:16:58.738388   20224 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.22.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=pause-20210915200708-22848 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.22.1 ClusterName:pause-20210915200708-22848 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0915 20:16:58.764824   20224 ssh_runner.go:152] Run: sudo ls /var/lib/minikube/binaries/v1.22.1
	I0915 20:16:59.081196   20224 binaries.go:44] Found k8s binaries, skipping transfer
	I0915 20:16:59.102366   20224 ssh_runner.go:152] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0915 20:16:59.350410   20224 ssh_runner.go:319] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (352 bytes)
	I0915 20:16:59.651370   20224 ssh_runner.go:319] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0915 20:16:58.679693   24676 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: (1.7752419s)
	I0915 20:16:58.679693   24676 logs.go:270] 1 containers: [fb131ec39e15]
	I0915 20:16:58.705990   24676 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 20:16:59.699957   24676 logs.go:270] 2 containers: [14ce8c55a5a9 c2f7bdd669ac]
	I0915 20:16:59.699957   24676 logs.go:123] Gathering logs for kube-controller-manager [14ce8c55a5a9] ...
	I0915 20:16:59.699957   24676 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 14ce8c55a5a9"
	I0915 20:17:01.108835   24676 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 14ce8c55a5a9": (1.408887s)
	I0915 20:17:01.129984   24676 logs.go:123] Gathering logs for Docker ...
	I0915 20:17:01.130150   24676 ssh_runner.go:152] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0915 20:17:01.685503   24676 logs.go:123] Gathering logs for describe nodes ...
	I0915 20:17:01.685503   24676 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 20:17:00.002275   20224 ssh_runner.go:319] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2069 bytes)
	I0915 20:17:00.507308   20224 ssh_runner.go:152] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0915 20:17:00.686431   20224 certs.go:52] Setting up C:\Users\jenkins\minikube-integration\.minikube\profiles\pause-20210915200708-22848 for IP: 192.168.49.2
	I0915 20:17:00.687048   20224 certs.go:179] skipping minikubeCA CA generation: C:\Users\jenkins\minikube-integration\.minikube\ca.key
	I0915 20:17:00.687289   20224 certs.go:179] skipping proxyClientCA CA generation: C:\Users\jenkins\minikube-integration\.minikube\proxy-client-ca.key
	I0915 20:17:00.688363   20224 certs.go:293] skipping minikube-user signed cert generation: C:\Users\jenkins\minikube-integration\.minikube\profiles\pause-20210915200708-22848\client.key
	I0915 20:17:00.688915   20224 certs.go:293] skipping minikube signed cert generation: C:\Users\jenkins\minikube-integration\.minikube\profiles\pause-20210915200708-22848\apiserver.key.dd3b5fb2
	I0915 20:17:00.689371   20224 certs.go:293] skipping aggregator signed cert generation: C:\Users\jenkins\minikube-integration\.minikube\profiles\pause-20210915200708-22848\proxy-client.key
	I0915 20:17:00.692775   20224 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\22848.pem (1338 bytes)
	W0915 20:17:00.693377   20224 certs.go:372] ignoring C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\22848_empty.pem, impossibly tiny 0 bytes
	I0915 20:17:00.693642   20224 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0915 20:17:00.694052   20224 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0915 20:17:00.694632   20224 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0915 20:17:00.695176   20224 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0915 20:17:00.695884   20224 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\228482.pem (1708 bytes)
	I0915 20:17:00.702763   20224 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\pause-20210915200708-22848\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0915 20:17:01.156147   20224 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\pause-20210915200708-22848\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0915 20:17:01.980861   20224 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\pause-20210915200708-22848\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0915 20:17:02.615234   20224 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\pause-20210915200708-22848\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0915 20:17:03.182486   20224 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0915 20:17:03.877512   20224 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0915 20:17:04.442857   20224 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0915 20:17:07.117571   24676 ssh_runner.go:192] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (5.4319018s)
	I0915 20:17:07.121972   24676 logs.go:123] Gathering logs for kube-apiserver [cade9f9b4edd] ...
	I0915 20:17:07.122302   24676 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 cade9f9b4edd"
	I0915 20:17:05.251794   20224 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0915 20:17:05.885166   20224 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0915 20:17:06.274383   20224 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\certs\22848.pem --> /usr/share/ca-certificates/22848.pem (1338 bytes)
	I0915 20:17:06.802961   20224 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\228482.pem --> /usr/share/ca-certificates/228482.pem (1708 bytes)
	I0915 20:17:07.140473   20224 ssh_runner.go:319] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0915 20:17:07.907426   20224 ssh_runner.go:152] Run: openssl version
	I0915 20:17:08.134549   20224 ssh_runner.go:152] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0915 20:17:08.581935   20224 ssh_runner.go:152] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0915 20:17:08.700467   20224 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Sep 15 18:34 /usr/share/ca-certificates/minikubeCA.pem
	I0915 20:17:08.720785   20224 ssh_runner.go:152] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0915 20:17:09.180891   20224 ssh_runner.go:152] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0915 20:17:09.357319   20224 ssh_runner.go:152] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22848.pem && ln -fs /usr/share/ca-certificates/22848.pem /etc/ssl/certs/22848.pem"
	I0915 20:17:09.508152   20224 ssh_runner.go:152] Run: ls -la /usr/share/ca-certificates/22848.pem
	I0915 20:17:09.627517   20224 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Sep 15 18:55 /usr/share/ca-certificates/22848.pem
	I0915 20:17:09.670578   20224 ssh_runner.go:152] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22848.pem
	I0915 20:17:09.309981   24676 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 cade9f9b4edd": (2.1876938s)
	I0915 20:17:09.354309   24676 logs.go:123] Gathering logs for coredns [5cbf0d0f1d37] ...
	I0915 20:17:09.354309   24676 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 5cbf0d0f1d37"
	I0915 20:17:09.960825   24676 logs.go:123] Gathering logs for kube-scheduler [1d5441743b86] ...
	I0915 20:17:09.961067   24676 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 1d5441743b86"
	I0915 20:17:11.664297   24676 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 1d5441743b86": (1.7030645s)
	I0915 20:17:11.690565   24676 logs.go:123] Gathering logs for storage-provisioner [fb131ec39e15] ...
	I0915 20:17:11.690763   24676 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 fb131ec39e15"
	I0915 20:17:12.640371   24676 logs.go:123] Gathering logs for container status ...
	I0915 20:17:12.640761   24676 ssh_runner.go:152] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 20:17:09.861439   20224 ssh_runner.go:152] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22848.pem /etc/ssl/certs/51391683.0"
	I0915 20:17:10.015162   20224 ssh_runner.go:152] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/228482.pem && ln -fs /usr/share/ca-certificates/228482.pem /etc/ssl/certs/228482.pem"
	I0915 20:17:10.324402   20224 ssh_runner.go:152] Run: ls -la /usr/share/ca-certificates/228482.pem
	I0915 20:17:10.522200   20224 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Sep 15 18:55 /usr/share/ca-certificates/228482.pem
	I0915 20:17:10.567796   20224 ssh_runner.go:152] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/228482.pem
	I0915 20:17:10.906916   20224 ssh_runner.go:152] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/228482.pem /etc/ssl/certs/3ec20f2e.0"
	I0915 20:17:11.152119   20224 kubeadm.go:390] StartCluster: {Name:pause-20210915200708-22848 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.1 ClusterName:pause-20210915200708-22848 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.22.1 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0915 20:17:11.170264   20224 ssh_runner.go:152] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0915 20:17:12.626143   20224 ssh_runner.go:192] Completed: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}: (1.455708s)
	I0915 20:17:12.645049   20224 ssh_runner.go:152] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0915 20:17:12.916481   20224 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0915 20:17:12.916942   20224 kubeadm.go:600] restartCluster start
	I0915 20:17:12.949734   20224 ssh_runner.go:152] Run: sudo test -d /data/minikube
	I0915 20:17:13.149187   20224 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0915 20:17:13.167839   20224 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-20210915200708-22848
	I0915 20:17:14.072540   20224 kubeconfig.go:93] found "pause-20210915200708-22848" server: "https://127.0.0.1:57020"
	I0915 20:17:14.077214   20224 kapi.go:59] client config for pause-20210915200708-22848: &rest.Config{Host:"https://127.0.0.1:57020", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins\\minikube-integration\\.minikube\\profiles\\pause-20210915200708-22848\\client.crt", KeyFile:"C:\\Users\\jenkins\\minikube-integration\\.minikube\\profiles\\pause-20210915200708-22848\\client.key", CAFile:"C:\\Users\\jenkins\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fd9780), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0915 20:17:14.139839   20224 ssh_runner.go:152] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0915 20:17:14.419739   20224 api_server.go:164] Checking apiserver status ...
	I0915 20:17:14.449368   20224 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 20:17:14.908242   20224 ssh_runner.go:152] Run: sudo egrep ^[0-9]+:freezer: /proc/2098/cgroup
	I0915 20:17:15.091843   20224 api_server.go:180] apiserver freezer: "7:freezer:/docker/6554305ea083021e47ef95d9404a882456ffba7198c927c4a672a0b6a8ca0f91/kubepods/burstable/pod1aaadc652fe103668d0cb72d535473c3/7ca5f8ec2a9bbaa0831dbd78d59ed4e4150de6afca7445b07cf861fb31fcce24"
	I0915 20:17:15.111116   20224 ssh_runner.go:152] Run: sudo cat /sys/fs/cgroup/freezer/docker/6554305ea083021e47ef95d9404a882456ffba7198c927c4a672a0b6a8ca0f91/kubepods/burstable/pod1aaadc652fe103668d0cb72d535473c3/7ca5f8ec2a9bbaa0831dbd78d59ed4e4150de6afca7445b07cf861fb31fcce24/freezer.state
	I0915 20:17:15.378612   20224 api_server.go:202] freezer state: "THAWED"
	I0915 20:17:15.378612   20224 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:57020/healthz ...
	I0915 20:17:15.441682   20224 api_server.go:265] https://127.0.0.1:57020/healthz returned 200:
	ok
	I0915 20:17:15.716877   20224 system_pods.go:86] 6 kube-system pods found
	I0915 20:17:15.717155   20224 system_pods.go:89] "coredns-78fcd69978-h8mkf" [e08be8b7-e007-4912-9cb8-cb40ccb84b65] Running
	I0915 20:17:15.717155   20224 system_pods.go:89] "etcd-pause-20210915200708-22848" [abe70d57-75fd-4f0d-81ed-6c4fc9a8a202] Running
	I0915 20:17:15.717155   20224 system_pods.go:89] "kube-apiserver-pause-20210915200708-22848" [e76ba76e-021a-46a0-b0c8-d496305c61ee] Running
	I0915 20:17:15.717155   20224 system_pods.go:89] "kube-controller-manager-pause-20210915200708-22848" [9d76ca6d-cbde-457e-8b70-f58bb75018a0] Running
	I0915 20:17:15.717155   20224 system_pods.go:89] "kube-proxy-4bkb5" [7900da64-de6b-4e95-bde7-74625ef16fda] Running
	I0915 20:17:15.717155   20224 system_pods.go:89] "kube-scheduler-pause-20210915200708-22848" [93dd1689-20a2-4901-bf23-01cbf22a2bad] Running
	I0915 20:17:15.724353   20224 api_server.go:139] control plane version: v1.22.1
	I0915 20:17:15.724488   20224 kubeadm.go:594] The running cluster does not require reconfiguration: 127.0.0.1
	I0915 20:17:15.724488   20224 kubeadm.go:647] Taking a shortcut, as the cluster seems to be properly configured
	I0915 20:17:15.724488   20224 kubeadm.go:604] restartCluster took 2.8075647s
	I0915 20:17:15.724488   20224 kubeadm.go:392] StartCluster complete in 4.5723993s
	I0915 20:17:15.724488   20224 settings.go:142] acquiring lock: {Name:mk81656fcf8bcddd49caaa1adb1c177165a02100 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 20:17:15.725199   20224 settings.go:150] Updating kubeconfig:  C:\Users\jenkins\minikube-integration\kubeconfig
	I0915 20:17:15.727255   20224 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\kubeconfig: {Name:mk312e0248780fd448f3a83862df8ee597f47373 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 20:17:15.761513   20224 kapi.go:59] client config for pause-20210915200708-22848: &rest.Config{Host:"https://127.0.0.1:57020", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins\\minikube-integration\\.minikube\\profiles\\pause-20210915200708-22848\\client.crt", KeyFile:"C:\\Users\\jenkins\\minikube-integration\\.minikube\\profiles\\pause-20210915200708-22848\\client.key", CAFile:"C:\\Users\\jenkins\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8
(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fd9780), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0915 20:17:15.843473   20224 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-20210915200708-22848" rescaled to 1
	I0915 20:17:15.844491   20224 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.22.1 ControlPlane:true Worker:true}
	I0915 20:17:15.844491   20224 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0915 20:17:15.847481   20224 out.go:177] * Verifying Kubernetes components...
	I0915 20:17:15.844491   20224 addons.go:404] enableAddons start: toEnable=map[], additional=[]
	I0915 20:17:15.845473   20224 config.go:177] Loaded profile config "pause-20210915200708-22848": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.1
	I0915 20:17:15.847481   20224 addons.go:65] Setting storage-provisioner=true in profile "pause-20210915200708-22848"
	I0915 20:17:15.847481   20224 addons.go:65] Setting default-storageclass=true in profile "pause-20210915200708-22848"
	I0915 20:17:15.847481   20224 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-20210915200708-22848"
	I0915 20:17:15.847481   20224 addons.go:153] Setting addon storage-provisioner=true in "pause-20210915200708-22848"
	W0915 20:17:15.848483   20224 addons.go:165] addon storage-provisioner should already be in state true
	I0915 20:17:15.848483   20224 host.go:66] Checking if "pause-20210915200708-22848" exists ...
	I0915 20:17:15.866523   20224 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
	I0915 20:17:15.869778   20224 cli_runner.go:115] Run: docker container inspect pause-20210915200708-22848 --format={{.State.Status}}
	I0915 20:17:15.869778   20224 cli_runner.go:115] Run: docker container inspect pause-20210915200708-22848 --format={{.State.Status}}
	I0915 20:17:13.397961   24676 logs.go:123] Gathering logs for kubelet ...
	I0915 20:17:13.398172   24676 ssh_runner.go:152] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 20:17:14.205955   24676 logs.go:123] Gathering logs for dmesg ...
	I0915 20:17:14.205955   24676 ssh_runner.go:152] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 20:17:14.909545   24676 logs.go:123] Gathering logs for etcd [f66ec49c3f94] ...
	I0915 20:17:14.909545   24676 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 f66ec49c3f94"
	I0915 20:17:16.771068   20224 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0915 20:17:16.771587   20224 addons.go:337] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 20:17:16.771810   20224 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0915 20:17:16.788138   20224 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210915200708-22848
	I0915 20:17:16.825885   20224 kapi.go:59] client config for pause-20210915200708-22848: &rest.Config{Host:"https://127.0.0.1:57020", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins\\minikube-integration\\.minikube\\profiles\\pause-20210915200708-22848\\client.crt", KeyFile:"C:\\Users\\jenkins\\minikube-integration\\.minikube\\profiles\\pause-20210915200708-22848\\client.key", CAFile:"C:\\Users\\jenkins\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8
(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fd9780), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0915 20:17:16.944763   20224 addons.go:153] Setting addon default-storageclass=true in "pause-20210915200708-22848"
	W0915 20:17:16.944763   20224 addons.go:165] addon default-storageclass should already be in state true
	I0915 20:17:16.944763   20224 host.go:66] Checking if "pause-20210915200708-22848" exists ...
	I0915 20:17:16.981646   20224 cli_runner.go:115] Run: docker container inspect pause-20210915200708-22848 --format={{.State.Status}}
	I0915 20:17:17.736083   20224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57016 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\pause-20210915200708-22848\id_rsa Username:docker}
	I0915 20:17:17.740083   20224 addons.go:337] installing /etc/kubernetes/addons/storageclass.yaml
	I0915 20:17:17.740083   20224 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0915 20:17:17.752090   20224 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210915200708-22848
	I0915 20:17:18.638148   20224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57016 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\pause-20210915200708-22848\id_rsa Username:docker}
	I0915 20:17:20.636790   24676 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 f66ec49c3f94": (5.7272814s)
	I0915 20:17:20.728934   24676 logs.go:123] Gathering logs for kube-proxy [b530d1d9d2dc] ...
	I0915 20:17:20.728934   24676 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 b530d1d9d2dc"
	I0915 20:17:21.583602   20224 ssh_runner.go:192] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (5.7391479s)
	I0915 20:17:21.584102   20224 start.go:709] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0915 20:17:21.587372   20224 ssh_runner.go:192] Completed: sudo systemctl is-active --quiet service kubelet: (5.7171157s)
	I0915 20:17:21.613936   20224 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-20210915200708-22848
	I0915 20:17:22.366027   20224 node_ready.go:35] waiting up to 6m0s for node "pause-20210915200708-22848" to be "Ready" ...
	I0915 20:17:22.455340   20224 node_ready.go:49] node "pause-20210915200708-22848" has status "Ready":"True"
	I0915 20:17:22.455340   20224 node_ready.go:38] duration metric: took 89.1125ms waiting for node "pause-20210915200708-22848" to be "Ready" ...
	I0915 20:17:22.455340   20224 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 20:17:22.731009   20224 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-h8mkf" in "kube-system" namespace to be "Ready" ...
	I0915 20:17:22.971282   20224 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 20:17:24.152963   20224 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0915 20:17:23.576317   24676 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 b530d1d9d2dc": (2.8474015s)
	I0915 20:17:23.578469   24676 logs.go:123] Gathering logs for kube-controller-manager [c2f7bdd669ac] ...
	I0915 20:17:23.578613   24676 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 c2f7bdd669ac"
	I0915 20:17:27.632719   24676 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 c2f7bdd669ac": (4.0541313s)
	I0915 20:17:25.309712   20224 pod_ready.go:92] pod "coredns-78fcd69978-h8mkf" in "kube-system" namespace has status "Ready":"True"
	I0915 20:17:25.309712   20224 pod_ready.go:81] duration metric: took 2.5787198s waiting for pod "coredns-78fcd69978-h8mkf" in "kube-system" namespace to be "Ready" ...
	I0915 20:17:25.309712   20224 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-20210915200708-22848" in "kube-system" namespace to be "Ready" ...
	I0915 20:17:25.421395   20224 pod_ready.go:92] pod "etcd-pause-20210915200708-22848" in "kube-system" namespace has status "Ready":"True"
	I0915 20:17:25.421395   20224 pod_ready.go:81] duration metric: took 111.6838ms waiting for pod "etcd-pause-20210915200708-22848" in "kube-system" namespace to be "Ready" ...
	I0915 20:17:25.421395   20224 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-20210915200708-22848" in "kube-system" namespace to be "Ready" ...
	I0915 20:17:25.584570   20224 pod_ready.go:92] pod "kube-apiserver-pause-20210915200708-22848" in "kube-system" namespace has status "Ready":"True"
	I0915 20:17:25.584755   20224 pod_ready.go:81] duration metric: took 163.3607ms waiting for pod "kube-apiserver-pause-20210915200708-22848" in "kube-system" namespace to be "Ready" ...
	I0915 20:17:25.584929   20224 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-20210915200708-22848" in "kube-system" namespace to be "Ready" ...
	I0915 20:17:25.648045   20224 pod_ready.go:92] pod "kube-controller-manager-pause-20210915200708-22848" in "kube-system" namespace has status "Ready":"True"
	I0915 20:17:25.648045   20224 pod_ready.go:81] duration metric: took 63.1163ms waiting for pod "kube-controller-manager-pause-20210915200708-22848" in "kube-system" namespace to be "Ready" ...
	I0915 20:17:25.648385   20224 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4bkb5" in "kube-system" namespace to be "Ready" ...
	I0915 20:17:25.854419   20224 pod_ready.go:92] pod "kube-proxy-4bkb5" in "kube-system" namespace has status "Ready":"True"
	I0915 20:17:25.854549   20224 pod_ready.go:81] duration metric: took 206.1654ms waiting for pod "kube-proxy-4bkb5" in "kube-system" namespace to be "Ready" ...
	I0915 20:17:25.854549   20224 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-20210915200708-22848" in "kube-system" namespace to be "Ready" ...
	I0915 20:17:26.108760   20224 pod_ready.go:92] pod "kube-scheduler-pause-20210915200708-22848" in "kube-system" namespace has status "Ready":"True"
	I0915 20:17:26.108760   20224 pod_ready.go:81] duration metric: took 254.2128ms waiting for pod "kube-scheduler-pause-20210915200708-22848" in "kube-system" namespace to be "Ready" ...
	I0915 20:17:26.108760   20224 pod_ready.go:38] duration metric: took 3.653444s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 20:17:26.108760   20224 api_server.go:50] waiting for apiserver process to appear ...
	I0915 20:17:26.132567   20224 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 20:17:32.130163   20224 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.9772516s)
	I0915 20:17:32.130603   20224 ssh_runner.go:192] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (5.9977679s)
	I0915 20:17:32.130603   20224 api_server.go:70] duration metric: took 16.2862165s to wait for apiserver process to appear ...
	I0915 20:17:32.130795   20224 api_server.go:86] waiting for apiserver healthz status ...
	I0915 20:17:32.130795   20224 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:57020/healthz ...
	I0915 20:17:32.130603   20224 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.1589394s)
	I0915 20:17:32.136538   20224 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0915 20:17:32.137918   20224 addons.go:406] enableAddons completed in 16.2935314s
	I0915 20:17:32.244458   20224 api_server.go:265] https://127.0.0.1:57020/healthz returned 200:
	ok
	I0915 20:17:32.253264   20224 api_server.go:139] control plane version: v1.22.1
	I0915 20:17:32.253264   20224 api_server.go:129] duration metric: took 122.4696ms to wait for apiserver health ...
	I0915 20:17:32.253516   20224 system_pods.go:43] waiting for kube-system pods to appear ...
	I0915 20:17:32.342299   20224 system_pods.go:59] 7 kube-system pods found
	I0915 20:17:32.342299   20224 system_pods.go:61] "coredns-78fcd69978-h8mkf" [e08be8b7-e007-4912-9cb8-cb40ccb84b65] Running
	I0915 20:17:32.342299   20224 system_pods.go:61] "etcd-pause-20210915200708-22848" [abe70d57-75fd-4f0d-81ed-6c4fc9a8a202] Running
	I0915 20:17:32.342299   20224 system_pods.go:61] "kube-apiserver-pause-20210915200708-22848" [e76ba76e-021a-46a0-b0c8-d496305c61ee] Running
	I0915 20:17:32.342299   20224 system_pods.go:61] "kube-controller-manager-pause-20210915200708-22848" [9d76ca6d-cbde-457e-8b70-f58bb75018a0] Running
	I0915 20:17:32.342299   20224 system_pods.go:61] "kube-proxy-4bkb5" [7900da64-de6b-4e95-bde7-74625ef16fda] Running
	I0915 20:17:32.342299   20224 system_pods.go:61] "kube-scheduler-pause-20210915200708-22848" [93dd1689-20a2-4901-bf23-01cbf22a2bad] Running
	I0915 20:17:32.342299   20224 system_pods.go:61] "storage-provisioner" [4ddb3569-9dc0-4624-b844-26b3a7726c3d] Pending
	I0915 20:17:32.342299   20224 system_pods.go:74] duration metric: took 88.784ms to wait for pod list to return data ...
	I0915 20:17:32.342299   20224 default_sa.go:34] waiting for default service account to be created ...
	I0915 20:17:32.358112   20224 default_sa.go:45] found service account: "default"
	I0915 20:17:32.358112   20224 default_sa.go:55] duration metric: took 15.8125ms for default service account to be created ...
	I0915 20:17:32.358112   20224 system_pods.go:116] waiting for k8s-apps to be running ...
	I0915 20:17:32.427369   20224 system_pods.go:86] 7 kube-system pods found
	I0915 20:17:32.427492   20224 system_pods.go:89] "coredns-78fcd69978-h8mkf" [e08be8b7-e007-4912-9cb8-cb40ccb84b65] Running
	I0915 20:17:32.427492   20224 system_pods.go:89] "etcd-pause-20210915200708-22848" [abe70d57-75fd-4f0d-81ed-6c4fc9a8a202] Running
	I0915 20:17:32.427492   20224 system_pods.go:89] "kube-apiserver-pause-20210915200708-22848" [e76ba76e-021a-46a0-b0c8-d496305c61ee] Running
	I0915 20:17:32.427492   20224 system_pods.go:89] "kube-controller-manager-pause-20210915200708-22848" [9d76ca6d-cbde-457e-8b70-f58bb75018a0] Running
	I0915 20:17:32.427492   20224 system_pods.go:89] "kube-proxy-4bkb5" [7900da64-de6b-4e95-bde7-74625ef16fda] Running
	I0915 20:17:32.427492   20224 system_pods.go:89] "kube-scheduler-pause-20210915200708-22848" [93dd1689-20a2-4901-bf23-01cbf22a2bad] Running
	I0915 20:17:32.427492   20224 system_pods.go:89] "storage-provisioner" [4ddb3569-9dc0-4624-b844-26b3a7726c3d] Pending
	I0915 20:17:32.427492   20224 system_pods.go:126] duration metric: took 69.3803ms to wait for k8s-apps to be running ...
	I0915 20:17:32.427628   20224 system_svc.go:44] waiting for kubelet service to be running ....
	I0915 20:17:32.441581   20224 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
	I0915 20:17:32.787088   20224 system_svc.go:56] duration metric: took 359.4627ms WaitForService to wait for kubelet.
	I0915 20:17:32.787088   20224 kubeadm.go:547] duration metric: took 16.942706s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0915 20:17:32.787088   20224 node_conditions.go:102] verifying NodePressure condition ...
	I0915 20:17:32.834564   20224 node_conditions.go:122] node storage ephemeral capacity is 65792556Ki
	I0915 20:17:32.834666   20224 node_conditions.go:123] node cpu capacity is 4
	I0915 20:17:32.834666   20224 node_conditions.go:105] duration metric: took 47.5784ms to run NodePressure ...
	I0915 20:17:32.834666   20224 start.go:231] waiting for startup goroutines ...
	I0915 20:17:33.045457   20224 start.go:462] kubectl: 1.20.0, cluster: 1.22.1 (minor skew: 2)
	I0915 20:17:33.048536   20224 out.go:177] 
	W0915 20:17:33.049217   20224 out.go:242] ! C:\Program Files\Docker\Docker\resources\bin\kubectl.exe is version 1.20.0, which may have incompatibilities with Kubernetes 1.22.1.
	I0915 20:17:33.052164   20224 out.go:177]   - Want kubectl v1.22.1? Try 'minikube kubectl -- get pods -A'
	I0915 20:17:33.059447   20224 out.go:177] * Done! kubectl is now configured to use "pause-20210915200708-22848" cluster and "default" namespace by default
	I0915 20:17:30.174404   24676 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 20:17:30.443213   24676 api_server.go:70] duration metric: took 2m27.9708313s to wait for apiserver process to appear ...
	I0915 20:17:30.443405   24676 api_server.go:86] waiting for apiserver healthz status ...
	I0915 20:17:30.464684   24676 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 20:17:31.108883   24676 logs.go:270] 1 containers: [cade9f9b4edd]
	I0915 20:17:31.126339   24676 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 20:17:31.422714   24676 logs.go:270] 1 containers: [f66ec49c3f94]
	I0915 20:17:31.445230   24676 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 20:17:31.839505   24676 logs.go:270] 1 containers: [5cbf0d0f1d37]
	I0915 20:17:31.855431   24676 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 20:17:32.608143   24676 logs.go:270] 1 containers: [1d5441743b86]
	I0915 20:17:32.628602   24676 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 20:17:33.408986   24676 logs.go:270] 1 containers: [b530d1d9d2dc]
	I0915 20:17:33.419603   24676 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0915 20:17:33.955049   24676 logs.go:270] 0 containers: []
	W0915 20:17:33.955404   24676 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0915 20:17:33.984134   24676 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 20:17:34.840399   24676 logs.go:270] 1 containers: [fb131ec39e15]
	I0915 20:17:34.859231   24676 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 20:17:36.342021   24676 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: (1.4827987s)
	I0915 20:17:36.342021   24676 logs.go:270] 2 containers: [14ce8c55a5a9 c2f7bdd669ac]
	I0915 20:17:36.342463   24676 logs.go:123] Gathering logs for kubelet ...
	I0915 20:17:36.342463   24676 ssh_runner.go:152] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 20:17:37.309543   24676 logs.go:123] Gathering logs for kube-apiserver [cade9f9b4edd] ...
	I0915 20:17:37.309543   24676 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 cade9f9b4edd"
	I0915 20:17:39.220953   24676 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 cade9f9b4edd": (1.9112393s)
	I0915 20:17:39.250584   24676 logs.go:123] Gathering logs for etcd [f66ec49c3f94] ...
	I0915 20:17:39.250584   24676 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 f66ec49c3f94"
	I0915 20:17:41.197745   24676 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 f66ec49c3f94": (1.9471741s)
	I0915 20:17:41.286761   24676 logs.go:123] Gathering logs for kube-scheduler [1d5441743b86] ...
	I0915 20:17:41.286761   24676 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 1d5441743b86"
	I0915 20:17:42.077713   24676 logs.go:123] Gathering logs for kube-controller-manager [14ce8c55a5a9] ...
	I0915 20:17:42.077713   24676 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 14ce8c55a5a9"
	I0915 20:17:42.764959   24676 logs.go:123] Gathering logs for kube-controller-manager [c2f7bdd669ac] ...
	I0915 20:17:42.764959   24676 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 c2f7bdd669ac"
	I0915 20:17:43.182962   24676 logs.go:123] Gathering logs for Docker ...
	I0915 20:17:43.182962   24676 ssh_runner.go:152] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0915 20:17:43.348719   24676 logs.go:123] Gathering logs for dmesg ...
	I0915 20:17:43.348719   24676 ssh_runner.go:152] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 20:17:43.512551   24676 logs.go:123] Gathering logs for describe nodes ...
	I0915 20:17:43.512551   24676 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 20:17:44.530482   24676 ssh_runner.go:192] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (1.0179373s)
	I0915 20:17:44.534831   24676 logs.go:123] Gathering logs for coredns [5cbf0d0f1d37] ...
	I0915 20:17:44.534831   24676 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 5cbf0d0f1d37"
	I0915 20:17:45.091357   24676 logs.go:123] Gathering logs for kube-proxy [b530d1d9d2dc] ...
	I0915 20:17:45.091490   24676 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 b530d1d9d2dc"
	I0915 20:17:45.722315   24676 logs.go:123] Gathering logs for storage-provisioner [fb131ec39e15] ...
	I0915 20:17:45.727402   24676 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 fb131ec39e15"
	I0915 20:17:46.454238   24676 logs.go:123] Gathering logs for container status ...
	I0915 20:17:46.454238   24676 ssh_runner.go:152] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 20:17:49.610857   24676 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:57024/healthz ...
	I0915 20:17:49.736518   24676 api_server.go:265] https://127.0.0.1:57024/healthz returned 200:
	ok
	I0915 20:17:49.748177   24676 api_server.go:139] control plane version: v1.22.1
	I0915 20:17:49.748177   24676 api_server.go:129] duration metric: took 19.3048961s to wait for apiserver health ...
	I0915 20:17:49.748177   24676 system_pods.go:43] waiting for kube-system pods to appear ...
	I0915 20:17:49.765893   24676 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 20:17:50.250324   24676 logs.go:270] 1 containers: [cade9f9b4edd]
	I0915 20:17:50.264148   24676 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 20:17:50.517174   24676 logs.go:270] 1 containers: [f66ec49c3f94]
	I0915 20:17:50.532410   24676 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 20:17:50.899866   24676 logs.go:270] 1 containers: [5cbf0d0f1d37]
	I0915 20:17:50.913695   24676 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 20:17:51.331168   24676 logs.go:270] 1 containers: [1d5441743b86]
	I0915 20:17:51.357487   24676 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 20:17:51.879167   24676 logs.go:270] 1 containers: [b530d1d9d2dc]
	I0915 20:17:51.895654   24676 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0915 20:17:52.632696   24676 logs.go:270] 0 containers: []
	W0915 20:17:52.632879   24676 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0915 20:17:52.650344   24676 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 20:17:52.988005   24676 logs.go:270] 1 containers: [fb131ec39e15]
	I0915 20:17:53.002070   24676 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2021-09-15 20:07:40 UTC, end at Wed 2021-09-15 20:18:02 UTC. --
	Sep 15 20:12:01 pause-20210915200708-22848 dockerd[467]: time="2021-09-15T20:12:01.275345500Z" level=info msg="Processing signal 'terminated'"
	Sep 15 20:12:01 pause-20210915200708-22848 dockerd[467]: time="2021-09-15T20:12:01.303410400Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 15 20:12:01 pause-20210915200708-22848 dockerd[467]: time="2021-09-15T20:12:01.316224500Z" level=info msg="Daemon shutdown complete"
	Sep 15 20:12:01 pause-20210915200708-22848 systemd[1]: docker.service: Succeeded.
	Sep 15 20:12:01 pause-20210915200708-22848 systemd[1]: Stopped Docker Application Container Engine.
	Sep 15 20:12:01 pause-20210915200708-22848 systemd[1]: Starting Docker Application Container Engine...
	Sep 15 20:12:01 pause-20210915200708-22848 dockerd[777]: time="2021-09-15T20:12:01.771010200Z" level=info msg="Starting up"
	Sep 15 20:12:01 pause-20210915200708-22848 dockerd[777]: time="2021-09-15T20:12:01.798008400Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Sep 15 20:12:01 pause-20210915200708-22848 dockerd[777]: time="2021-09-15T20:12:01.798229900Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Sep 15 20:12:01 pause-20210915200708-22848 dockerd[777]: time="2021-09-15T20:12:01.798282300Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Sep 15 20:12:01 pause-20210915200708-22848 dockerd[777]: time="2021-09-15T20:12:01.798369700Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Sep 15 20:12:01 pause-20210915200708-22848 dockerd[777]: time="2021-09-15T20:12:01.821076400Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Sep 15 20:12:01 pause-20210915200708-22848 dockerd[777]: time="2021-09-15T20:12:01.821168100Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Sep 15 20:12:01 pause-20210915200708-22848 dockerd[777]: time="2021-09-15T20:12:01.821214400Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Sep 15 20:12:01 pause-20210915200708-22848 dockerd[777]: time="2021-09-15T20:12:01.821256300Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Sep 15 20:12:01 pause-20210915200708-22848 dockerd[777]: time="2021-09-15T20:12:01.898374300Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Sep 15 20:12:01 pause-20210915200708-22848 dockerd[777]: time="2021-09-15T20:12:01.973858600Z" level=info msg="Loading containers: start."
	Sep 15 20:12:02 pause-20210915200708-22848 dockerd[777]: time="2021-09-15T20:12:02.800123100Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 15 20:12:03 pause-20210915200708-22848 dockerd[777]: time="2021-09-15T20:12:03.214102300Z" level=info msg="Loading containers: done."
	Sep 15 20:12:03 pause-20210915200708-22848 dockerd[777]: time="2021-09-15T20:12:03.406720900Z" level=info msg="Docker daemon" commit=75249d8 graphdriver(s)=overlay2 version=20.10.8
	Sep 15 20:12:03 pause-20210915200708-22848 dockerd[777]: time="2021-09-15T20:12:03.407167900Z" level=info msg="Daemon has completed initialization"
	Sep 15 20:12:03 pause-20210915200708-22848 systemd[1]: Started Docker Application Container Engine.
	Sep 15 20:12:03 pause-20210915200708-22848 dockerd[777]: time="2021-09-15T20:12:03.786697000Z" level=info msg="API listen on [::]:2376"
	Sep 15 20:12:03 pause-20210915200708-22848 dockerd[777]: time="2021-09-15T20:12:03.864662300Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 15 20:13:52 pause-20210915200708-22848 dockerd[777]: time="2021-09-15T20:13:52.695049600Z" level=info msg="ignoring event" container=9e07cecd07fa8cf54fbdaab34b3b86f5c8fb109db4c40d1fb90741e5b2ac25f3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* time="2021-09-15T20:18:04Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	CONTAINER ID   IMAGE                  COMMAND                  CREATED          STATUS                       PORTS     NAMES
	77c86029f941   6e38f40d628d           "/storage-provisioner"   26 seconds ago   Created                                k8s_storage-provisioner_storage-provisioner_kube-system_4ddb3569-9dc0-4624-b844-26b3a7726c3d_0
	db34b7dfcd04   k8s.gcr.io/pause:3.5   "/pause"                 32 seconds ago   Up 26 seconds (Paused)                 k8s_POD_storage-provisioner_kube-system_4ddb3569-9dc0-4624-b844-26b3a7726c3d_0
	a26e25f48fbd   8d147537fb7d           "/coredns -conf /etc…"   2 minutes ago    Up 2 minutes (Paused)                  k8s_coredns_coredns-78fcd69978-h8mkf_kube-system_e08be8b7-e007-4912-9cb8-cb40ccb84b65_0
	27e227119dcc   36c4ebbc9d97           "/usr/local/bin/kube…"   2 minutes ago    Up 2 minutes (Paused)                  k8s_kube-proxy_kube-proxy-4bkb5_kube-system_7900da64-de6b-4e95-bde7-74625ef16fda_0
	cb459445a096   k8s.gcr.io/pause:3.5   "/pause"                 2 minutes ago    Up 2 minutes (Paused)                  k8s_POD_coredns-78fcd69978-h8mkf_kube-system_e08be8b7-e007-4912-9cb8-cb40ccb84b65_0
	3bc29bae3d46   k8s.gcr.io/pause:3.5   "/pause"                 2 minutes ago    Up 2 minutes (Paused)                  k8s_POD_kube-proxy-4bkb5_kube-system_7900da64-de6b-4e95-bde7-74625ef16fda_0
	94c8caabe365   6e002eb89a88           "kube-controller-man…"   4 minutes ago    Up 3 minutes (Paused)                  k8s_kube-controller-manager_kube-controller-manager-pause-20210915200708-22848_kube-system_b51f7cd05085dedfe3da50ebbdfbf546_1
	765e5d7822ad   aca5ededae9c           "kube-scheduler --au…"   5 minutes ago    Up 5 minutes (Paused)                  k8s_kube-scheduler_kube-scheduler-pause-20210915200708-22848_kube-system_874e3c4aee88fb965c5fb7cfeb545dc6_0
	9e07cecd07fa   6e002eb89a88           "kube-controller-man…"   5 minutes ago    Exited (255) 4 minutes ago             k8s_kube-controller-manager_kube-controller-manager-pause-20210915200708-22848_kube-system_b51f7cd05085dedfe3da50ebbdfbf546_0
	7ca5f8ec2a9b   f30469a2491a           "kube-apiserver --ad…"   5 minutes ago    Up 5 minutes (Paused)                  k8s_kube-apiserver_kube-apiserver-pause-20210915200708-22848_kube-system_1aaadc652fe103668d0cb72d535473c3_0
	3a85cc5b65c8   004811815584           "etcd --advertise-cl…"   5 minutes ago    Up 5 minutes (Paused)                  k8s_etcd_etcd-pause-20210915200708-22848_kube-system_ecba0eb40ecad14d0249b9856054ecbb_0
	edf3e6c66910   k8s.gcr.io/pause:3.5   "/pause"                 5 minutes ago    Up 5 minutes (Paused)                  k8s_POD_kube-scheduler-pause-20210915200708-22848_kube-system_874e3c4aee88fb965c5fb7cfeb545dc6_0
	265ead2ebbac   k8s.gcr.io/pause:3.5   "/pause"                 5 minutes ago    Up 5 minutes (Paused)                  k8s_POD_kube-controller-manager-pause-20210915200708-22848_kube-system_b51f7cd05085dedfe3da50ebbdfbf546_0
	fcaababa1a2c   k8s.gcr.io/pause:3.5   "/pause"                 5 minutes ago    Up 5 minutes (Paused)                  k8s_POD_kube-apiserver-pause-20210915200708-22848_kube-system_1aaadc652fe103668d0cb72d535473c3_0
	1e8280857e04   k8s.gcr.io/pause:3.5   "/pause"                 5 minutes ago    Up 5 minutes (Paused)                  k8s_POD_etcd-pause-20210915200708-22848_kube-system_ecba0eb40ecad14d0249b9856054ecbb_0
	
	* 
	* ==> coredns [a26e25f48fbd] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	CoreDNS-1.8.4
	linux/amd64, go1.16.4, 053c4d5
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.000000]  hrtimer_interrupt+0x92/0x165
	[  +0.000000]  hv_stimer0_isr+0x20/0x2d
	[  +0.000000]  hv_stimer0_vector_handler+0x3b/0x57
	[  +0.000000]  hv_stimer0_callback_vector+0xf/0x20
	[  +0.000000]  </IRQ>
	[  +0.000000] RIP: 0010:arch_local_irq_enable+0x7/0x8
	[  +0.000000] Code: ef ff ff 0f 20 d8 0f 1f 40 00 c3 48 89 f8 0f 1f 40 00 c3 48 89 f8 0f 1f 40 00 c3 48 89 f8 0f 1f 40 00 c3 fb 66 0f 1f 44 00 00 <c3> 0f 1f 44 00 00 40 f6 c7 02 74 12 48 b8 ff 0f 00 00 00 00 f0 ff
	[  +0.000000] RSP: 0000:ffffbcaf423f7ee0 EFLAGS: 00000206 ORIG_RAX: ffffffffffffff12
	[  +0.000000] RAX: 0000000080000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000000] RDX: 000055a9735499db RSI: 0000000000000004 RDI: ffffbcaf423f7f58
	[  +0.000000] RBP: ffffbcaf423f7f58 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000000] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000004
	[  +0.000000] R13: 000055a9735499db R14: ffff97d483b18dc0 R15: ffff97d4e4dc7400
	[  +0.000000]  __do_page_fault+0x17f/0x42d
	[  +0.000000]  ? page_fault+0x8/0x30
	[  +0.000000]  page_fault+0x1e/0x30
	[  +0.000000] RIP: 0033:0x55a9730c8f03
	[  +0.000000] Code: 0f 6f d9 66 0f ef 0d ec 85 97 00 66 0f ef 15 f4 85 97 00 66 0f ef 1d fc 85 97 00 66 0f 38 dc c9 66 0f 38 dc d2 66 0f 38 dc db <f3> 0f 6f 20 f3 0f 6f 68 10 f3 0f 6f 74 08 e0 f3 0f 6f 7c 08 f0 66
	[  +0.000000] RSP: 002b:000000c00004bdc8 EFLAGS: 00010287
	[  +0.000000] RAX: 000055a9735499db RBX: 000055a9730cb860 RCX: 0000000000000022
	[  +0.000000] RDX: 000000c00004bde0 RSI: 000000c00004be48 RDI: 000000c000080868
	[  +0.000000] RBP: 000000c00004be28 R08: 000055a97353d681 R09: 0000000000000000
	[  +0.000000] R10: 0000000000000004 R11: 000000c0000807d0 R12: 000000000000001a
	[  +0.000000] R13: 0000000000000006 R14: 0000000000000008 R15: 0000000000000017
	[  +0.000000] ---[ end trace cdbbbbc925f6eff0 ]---
	
	* 
	* ==> etcd [3a85cc5b65c8] <==
	* {"level":"warn","ts":"2021-09-15T20:15:43.749Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"334.1241ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-20210915200708-22848\" ","response":"range_response_count:1 size:4683"}
	{"level":"info","ts":"2021-09-15T20:15:43.750Z","caller":"traceutil/trace.go:171","msg":"trace[2054732998] range","detail":"{range_begin:/registry/minions/pause-20210915200708-22848; range_end:; response_count:1; response_revision:488; }","duration":"334.3492ms","start":"2021-09-15T20:15:43.415Z","end":"2021-09-15T20:15:43.750Z","steps":["trace[2054732998] 'agreement among raft nodes before linearized reading'  (duration: 333.9656ms)"],"step_count":1}
	{"level":"warn","ts":"2021-09-15T20:15:43.750Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-09-15T20:15:43.415Z","time spent":"334.4815ms","remote":"127.0.0.1:58354","response type":"/etcdserverpb.KV/Range","request count":0,"request size":46,"response count":1,"response size":4707,"request content":"key:\"/registry/minions/pause-20210915200708-22848\" "}
	{"level":"warn","ts":"2021-09-15T20:15:43.760Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"171.2081ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-20210915200708-22848\" ","response":"range_response_count:1 size:4683"}
	{"level":"info","ts":"2021-09-15T20:15:43.761Z","caller":"traceutil/trace.go:171","msg":"trace[1596706449] range","detail":"{range_begin:/registry/minions/pause-20210915200708-22848; range_end:; response_count:1; response_revision:488; }","duration":"171.4106ms","start":"2021-09-15T20:15:43.589Z","end":"2021-09-15T20:15:43.761Z","steps":["trace[1596706449] 'agreement among raft nodes before linearized reading'  (duration: 171.0465ms)"],"step_count":1}
	{"level":"warn","ts":"2021-09-15T20:15:43.774Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"171.3364ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2021-09-15T20:15:43.775Z","caller":"traceutil/trace.go:171","msg":"trace[1509709782] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:488; }","duration":"171.6343ms","start":"2021-09-15T20:15:43.603Z","end":"2021-09-15T20:15:43.775Z","steps":["trace[1509709782] 'agreement among raft nodes before linearized reading'  (duration: 171.3046ms)"],"step_count":1}
	{"level":"warn","ts":"2021-09-15T20:15:43.776Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"163.3443ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-4bkb5\" ","response":"range_response_count:1 size:4438"}
	{"level":"info","ts":"2021-09-15T20:15:43.776Z","caller":"traceutil/trace.go:171","msg":"trace[1651047813] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-4bkb5; range_end:; response_count:1; response_revision:488; }","duration":"163.4029ms","start":"2021-09-15T20:15:43.613Z","end":"2021-09-15T20:15:43.776Z","steps":["trace[1651047813] 'agreement among raft nodes before linearized reading'  (duration: 148.5554ms)","trace[1651047813] 'range keys from bolt db'  (duration: 14.6058ms)"],"step_count":2}
	{"level":"warn","ts":"2021-09-15T20:15:56.511Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"106.5966ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128007677133368741 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:452 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:3956 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >>","response":"size:16"}
	{"level":"info","ts":"2021-09-15T20:15:56.512Z","caller":"traceutil/trace.go:171","msg":"trace[1847483393] transaction","detail":"{read_only:false; response_revision:504; number_of_response:1; }","duration":"109.1421ms","start":"2021-09-15T20:15:56.403Z","end":"2021-09-15T20:15:56.512Z","steps":["trace[1847483393] 'compare'  (duration: 106.4227ms)"],"step_count":1}
	{"level":"warn","ts":"2021-09-15T20:16:02.758Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"289.4031ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128007677133368797 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.49.2\" mod_revision:493 > success:<request_put:<key:\"/registry/masterleases/192.168.49.2\" value_size:67 lease:8128007677133368795 >> failure:<request_range:<key:\"/registry/masterleases/192.168.49.2\" > >>","response":"size:16"}
	{"level":"info","ts":"2021-09-15T20:16:02.758Z","caller":"traceutil/trace.go:171","msg":"trace[2031387987] linearizableReadLoop","detail":"{readStateIndex:534; appliedIndex:533; }","duration":"239.1857ms","start":"2021-09-15T20:16:02.519Z","end":"2021-09-15T20:16:02.758Z","steps":["trace[2031387987] 'read index received'  (duration: 101.917ms)","trace[2031387987] 'applied index is now lower than readState.Index'  (duration: 137.267ms)"],"step_count":2}
	{"level":"warn","ts":"2021-09-15T20:16:02.758Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"239.3319ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2021-09-15T20:16:02.758Z","caller":"traceutil/trace.go:171","msg":"trace[1614893085] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:505; }","duration":"239.408ms","start":"2021-09-15T20:16:02.519Z","end":"2021-09-15T20:16:02.758Z","steps":["trace[1614893085] 'agreement among raft nodes before linearized reading'  (duration: 239.307ms)"],"step_count":1}
	{"level":"info","ts":"2021-09-15T20:16:02.761Z","caller":"traceutil/trace.go:171","msg":"trace[1607560618] transaction","detail":"{read_only:false; response_revision:505; number_of_response:1; }","duration":"441.8987ms","start":"2021-09-15T20:16:02.319Z","end":"2021-09-15T20:16:02.761Z","steps":["trace[1607560618] 'process raft request'  (duration: 148.9736ms)","trace[1607560618] 'compare'  (duration: 289.0426ms)"],"step_count":2}
	{"level":"warn","ts":"2021-09-15T20:16:02.761Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-09-15T20:16:02.319Z","time spent":"441.9975ms","remote":"127.0.0.1:58312","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":118,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.49.2\" mod_revision:493 > success:<request_put:<key:\"/registry/masterleases/192.168.49.2\" value_size:67 lease:8128007677133368795 >> failure:<request_range:<key:\"/registry/masterleases/192.168.49.2\" > >"}
	{"level":"warn","ts":"2021-09-15T20:16:03.261Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"195.1145ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/roles/\" range_end:\"/registry/roles0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2021-09-15T20:16:03.262Z","caller":"traceutil/trace.go:171","msg":"trace[322300020] range","detail":"{range_begin:/registry/roles/; range_end:/registry/roles0; response_count:0; response_revision:505; }","duration":"195.6457ms","start":"2021-09-15T20:16:03.066Z","end":"2021-09-15T20:16:03.262Z","steps":["trace[322300020] 'count revisions from in-memory index tree'  (duration: 194.6044ms)"],"step_count":1}
	{"level":"info","ts":"2021-09-15T20:17:24.783Z","caller":"traceutil/trace.go:171","msg":"trace[382228905] linearizableReadLoop","detail":"{readStateIndex:565; appliedIndex:565; }","duration":"313.9663ms","start":"2021-09-15T20:17:24.469Z","end":"2021-09-15T20:17:24.783Z","steps":["trace[382228905] 'read index received'  (duration: 313.9583ms)","trace[382228905] 'applied index is now lower than readState.Index'  (duration: 6.4µs)"],"step_count":2}
	{"level":"warn","ts":"2021-09-15T20:17:25.106Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"636.8364ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-20210915200708-22848\" ","response":"range_response_count:1 size:4683"}
	{"level":"info","ts":"2021-09-15T20:17:25.114Z","caller":"traceutil/trace.go:171","msg":"trace[71132380] range","detail":"{range_begin:/registry/minions/pause-20210915200708-22848; range_end:; response_count:1; response_revision:521; }","duration":"644.6625ms","start":"2021-09-15T20:17:24.469Z","end":"2021-09-15T20:17:25.114Z","steps":["trace[71132380] 'agreement among raft nodes before linearized reading'  (duration: 336.2171ms)","trace[71132380] 'get authentication metadata'  (duration: 90.1632ms)","trace[71132380] 'range keys from in-memory index tree'  (duration: 210.374ms)"],"step_count":3}
	{"level":"warn","ts":"2021-09-15T20:17:25.111Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"145.0015ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2021-09-15T20:17:25.175Z","caller":"traceutil/trace.go:171","msg":"trace[874828865] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:521; }","duration":"209.0398ms","start":"2021-09-15T20:17:24.966Z","end":"2021-09-15T20:17:25.175Z","steps":["trace[874828865] 'range keys from in-memory index tree'  (duration: 136.8416ms)"],"step_count":1}
	{"level":"warn","ts":"2021-09-15T20:17:25.200Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-09-15T20:17:24.469Z","time spent":"731.0327ms","remote":"127.0.0.1:58354","response type":"/etcdserverpb.KV/Range","request count":0,"request size":46,"response count":1,"response size":4707,"request content":"key:\"/registry/minions/pause-20210915200708-22848\" "}
	
	* 
	* ==> kernel <==
	*  20:18:17 up  1:53,  0 users,  load average: 29.24, 24.79, 14.23
	Linux pause-20210915200708-22848 4.19.121-linuxkit #1 SMP Thu Jan 21 15:36:34 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [7ca5f8ec2a9b] <==
	* Trace[126170055]: ---"Transaction committed" 250ms (20:14:56.244)
	Trace[126170055]: [655.822ms] [655.822ms] END
	I0915 20:14:56.260722       1 trace.go:205] Trace[787499209]: "Patch" url:/api/v1/namespaces/kube-system/pods/coredns-78fcd69978-q672q/status,user-agent:kube-scheduler/v1.22.1 (linux/amd64) kubernetes/632ed30/scheduler,audit-id:e8ad9745-5031-43ae-95a7-0d40bbc2fceb,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (15-Sep-2021 20:14:55.588) (total time: 667ms):
	Trace[787499209]: ---"About to check admission control" 382ms (20:14:55.974)
	Trace[787499209]: ---"Object stored in database" 270ms (20:14:56.244)
	Trace[787499209]: [667.3915ms] [667.3915ms] END
	I0915 20:15:18.589843       1 trace.go:205] Trace[1562375959]: "Get" url:/api/v1/nodes/pause-20210915200708-22848,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,audit-id:deae9e9b-6e77-44a7-82ec-e8612f7db33a,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (15-Sep-2021 20:15:17.595) (total time: 994ms):
	Trace[1562375959]: ---"About to write a response" 991ms (20:15:18.586)
	Trace[1562375959]: [994.4456ms] [994.4456ms] END
	I0915 20:15:18.732580       1 trace.go:205] Trace[816836745]: "Patch" url:/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-20210915200708-22848/status,user-agent:kubelet/v1.22.1 (linux/amd64) kubernetes/632ed30,audit-id:7cce6bcb-de69-4519-8daf-a1697c981009,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (15-Sep-2021 20:15:18.192) (total time: 539ms):
	Trace[816836745]: ---"Recorded the audit event" 225ms (20:15:18.418)
	Trace[816836745]: ---"Object stored in database" 261ms (20:15:18.716)
	Trace[816836745]: [539.8643ms] [539.8643ms] END
	I0915 20:15:19.566687       1 trace.go:205] Trace[345276942]: "Get" url:/api/v1/namespaces/kube-system/pods/etcd-pause-20210915200708-22848,user-agent:kubelet/v1.22.1 (linux/amd64) kubernetes/632ed30,audit-id:458df39d-6555-46c9-906e-08052685d6d5,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (15-Sep-2021 20:15:19.006) (total time: 560ms):
	Trace[345276942]: ---"About to write a response" 559ms (20:15:19.565)
	Trace[345276942]: [560.2141ms] [560.2141ms] END
	I0915 20:15:19.592606       1 trace.go:205] Trace[1986093569]: "Get" url:/api/v1/nodes/pause-20210915200708-22848,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,audit-id:d6b0f253-3471-46ef-9f9b-36cb3819904c,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (15-Sep-2021 20:15:19.076) (total time: 516ms):
	Trace[1986093569]: ---"About to write a response" 484ms (20:15:19.561)
	Trace[1986093569]: [516.0118ms] [516.0118ms] END
	I0915 20:15:43.832062       1 trace.go:205] Trace[369660822]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (15-Sep-2021 20:15:43.320) (total time: 511ms):
	Trace[369660822]: ---"Transaction committed" 486ms (20:15:43.831)
	Trace[369660822]: [511.4162ms] [511.4162ms] END
	I0915 20:17:25.256877       1 trace.go:205] Trace[1748360662]: "Get" url:/api/v1/nodes/pause-20210915200708-22848,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,audit-id:110f9d66-f5cb-420c-b41d-eff4739821c3,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (15-Sep-2021 20:17:24.379) (total time: 876ms):
	Trace[1748360662]: ---"About to write a response" 872ms (20:17:25.252)
	Trace[1748360662]: [876.9963ms] [876.9963ms] END
	
	* 
	* ==> kube-controller-manager [94c8caabe365] <==
	* I0915 20:14:50.609348       1 shared_informer.go:247] Caches are synced for expand 
	I0915 20:14:50.654084       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0915 20:14:50.654388       1 shared_informer.go:247] Caches are synced for cronjob 
	I0915 20:14:50.707989       1 shared_informer.go:247] Caches are synced for taint 
	I0915 20:14:50.708206       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: 
	I0915 20:14:50.708270       1 shared_informer.go:247] Caches are synced for resource quota 
	W0915 20:14:50.708408       1 node_lifecycle_controller.go:1013] Missing timestamp for Node pause-20210915200708-22848. Assuming now as a timestamp.
	I0915 20:14:50.708519       1 node_lifecycle_controller.go:1164] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0915 20:14:50.708629       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0915 20:14:50.726336       1 shared_informer.go:247] Caches are synced for attach detach 
	I0915 20:14:50.761297       1 event.go:291] "Event occurred" object="pause-20210915200708-22848" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-20210915200708-22848 event: Registered Node pause-20210915200708-22848 in Controller"
	I0915 20:14:50.778299       1 shared_informer.go:247] Caches are synced for resource quota 
	I0915 20:14:50.971707       1 range_allocator.go:373] Set node pause-20210915200708-22848 PodCIDR to [10.244.0.0/24]
	I0915 20:14:51.514720       1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-pause-20210915200708-22848" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0915 20:14:52.793451       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-78fcd69978 to 2"
	I0915 20:14:54.003644       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-4bkb5"
	I0915 20:14:55.056413       1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-78fcd69978-q672q"
	I0915 20:14:55.918203       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0915 20:14:56.253481       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0915 20:14:56.264204       1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-78fcd69978-h8mkf"
	I0915 20:14:56.280750       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0915 20:14:56.323726       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0915 20:14:59.336446       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-78fcd69978 to 1"
	I0915 20:14:59.924948       1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-78fcd69978-q672q"
	I0915 20:15:20.765714       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	
	* 
	* ==> kube-controller-manager [9e07cecd07fa] <==
	* 	/usr/local/go/src/net/tcpsock_posix.go:139 +0x32
	net.(*TCPListener).Accept(0xc00000dab8, 0x7f13f18f7b68, 0x10, 0x203000, 0x203000)
		/usr/local/go/src/net/tcpsock.go:261 +0x65
	k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.tcpKeepAliveListener.Accept(0x51d2508, 0xc00000dab8, 0xc001035d10, 0x7aa30649, 0x7f672c55fb14cec0, 0x0)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/secure_serving.go:331 +0x35
	crypto/tls.(*listener).Accept(0xc000114060, 0xc001035d90, 0x18, 0xc000549e00, 0x880c9b)
		/usr/local/go/src/crypto/tls/tls.go:67 +0x37
	net/http.(*Server).Serve(0xc00021f340, 0x51c17f8, 0xc000114060, 0x0, 0x0)
		/usr/local/go/src/net/http/server.go:2961 +0x285
	k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.RunServer.func2(0xc000095440, 0x51d2508, 0xc00000dab8, 0xc00021f340, 0xc0002182a0)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/secure_serving.go:306 +0x11d
	created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.RunServer
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/secure_serving.go:296 +0xff
	
	goroutine 189 [runnable]:
	net/http.setRequestCancel.func4(0x0, 0xc000b9aea0, 0xc000092a00, 0xc000b9c7bc, 0xc0001289c0)
		/usr/local/go/src/net/http/client.go:397 +0x96
	created by net/http.setRequestCancel
		/usr/local/go/src/net/http/client.go:396 +0x337
	
	goroutine 217 [runnable]:
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*clientStream).awaitRequestCancel(0xc0008f2580, 0xc000bb6e00)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:343
	created by k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*clientConnReadLoop).handleResponse
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:2056 +0x728
	
	* 
	* ==> kube-proxy [27e227119dcc] <==
	* I0915 20:15:44.000606       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0915 20:15:44.000758       1 server_others.go:140] Detected node IP 192.168.49.2
	W0915 20:15:44.000830       1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
	I0915 20:15:45.937649       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0915 20:15:45.945655       1 server_others.go:212] Using iptables Proxier.
	I0915 20:15:45.957113       1 server_others.go:219] creating dualStackProxier for iptables.
	W0915 20:15:45.957166       1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0915 20:15:45.966479       1 server.go:649] Version: v1.22.1
	I0915 20:15:45.973655       1 config.go:224] Starting endpoint slice config controller
	I0915 20:15:45.973743       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0915 20:15:45.989543       1 config.go:315] Starting service config controller
	I0915 20:15:45.989565       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0915 20:15:46.221615       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	E0915 20:15:46.236014       1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pause-20210915200708-22848.16a5181b674fabb8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc048b2fc79b46534, ext:6695650601, loc:(*time.Location)(0x2d81340)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-pause-20210915200708-22848", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"pause-20210915200708-22848", UID:"pause-20210915200708-22848", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "pause-20210915200708-22848.16a5181b674fabb8" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
	I0915 20:15:46.298251       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [765e5d7822ad] <==
	* E0915 20:13:54.238998       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0915 20:13:54.297269       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0915 20:13:54.369285       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0915 20:13:54.380374       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0915 20:13:54.405250       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0915 20:13:54.562284       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0915 20:13:54.790417       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0915 20:13:55.131720       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0915 20:13:57.900710       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0915 20:13:58.097628       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0915 20:13:58.311803       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0915 20:13:58.445913       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0915 20:13:58.706779       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0915 20:13:58.752370       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0915 20:13:58.879431       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0915 20:13:59.149017       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0915 20:13:59.385396       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0915 20:13:59.394402       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0915 20:13:59.418134       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0915 20:13:59.486874       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0915 20:13:59.605182       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0915 20:13:59.789310       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0915 20:13:59.996704       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0915 20:14:06.020221       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0915 20:14:24.431468       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2021-09-15 20:07:40 UTC, end at Wed 2021-09-15 20:18:19 UTC. --
	Sep 15 20:15:13 pause-20210915200708-22848 kubelet[2790]: I0915 20:15:13.156004    2790 topology_manager.go:200] "Topology Admit Handler"
	Sep 15 20:15:13 pause-20210915200708-22848 kubelet[2790]: I0915 20:15:13.217397    2790 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7900da64-de6b-4e95-bde7-74625ef16fda-xtables-lock\") pod \"kube-proxy-4bkb5\" (UID: \"7900da64-de6b-4e95-bde7-74625ef16fda\") "
	Sep 15 20:15:13 pause-20210915200708-22848 kubelet[2790]: I0915 20:15:13.217491    2790 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7900da64-de6b-4e95-bde7-74625ef16fda-lib-modules\") pod \"kube-proxy-4bkb5\" (UID: \"7900da64-de6b-4e95-bde7-74625ef16fda\") "
	Sep 15 20:15:13 pause-20210915200708-22848 kubelet[2790]: I0915 20:15:13.217574    2790 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7900da64-de6b-4e95-bde7-74625ef16fda-kube-proxy\") pod \"kube-proxy-4bkb5\" (UID: \"7900da64-de6b-4e95-bde7-74625ef16fda\") "
	Sep 15 20:15:13 pause-20210915200708-22848 kubelet[2790]: I0915 20:15:13.373062    2790 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4h48\" (UniqueName: \"kubernetes.io/projected/7900da64-de6b-4e95-bde7-74625ef16fda-kube-api-access-m4h48\") pod \"kube-proxy-4bkb5\" (UID: \"7900da64-de6b-4e95-bde7-74625ef16fda\") "
	Sep 15 20:15:16 pause-20210915200708-22848 kubelet[2790]: E0915 20:15:16.113305    2790 kubelet.go:1701] "Failed creating a mirror pod for" err="pods \"etcd-pause-20210915200708-22848\" already exists" pod="kube-system/etcd-pause-20210915200708-22848"
	Sep 15 20:15:20 pause-20210915200708-22848 kubelet[2790]: I0915 20:15:20.315347    2790 topology_manager.go:200] "Topology Admit Handler"
	Sep 15 20:15:20 pause-20210915200708-22848 kubelet[2790]: I0915 20:15:20.362072    2790 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e08be8b7-e007-4912-9cb8-cb40ccb84b65-config-volume\") pod \"coredns-78fcd69978-h8mkf\" (UID: \"e08be8b7-e007-4912-9cb8-cb40ccb84b65\") "
	Sep 15 20:15:20 pause-20210915200708-22848 kubelet[2790]: I0915 20:15:20.362151    2790 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sr7fz\" (UniqueName: \"kubernetes.io/projected/e08be8b7-e007-4912-9cb8-cb40ccb84b65-kube-api-access-sr7fz\") pod \"coredns-78fcd69978-h8mkf\" (UID: \"e08be8b7-e007-4912-9cb8-cb40ccb84b65\") "
	Sep 15 20:15:25 pause-20210915200708-22848 kubelet[2790]: W0915 20:15:25.766783    2790 container.go:586] Failed to update stats for container "/kubepods/burstable/pode08be8b7-e007-4912-9cb8-cb40ccb84b65": /sys/fs/cgroup/cpuset/kubepods/burstable/pode08be8b7-e007-4912-9cb8-cb40ccb84b65/cpuset.cpus found to be empty, continuing to push stats
	Sep 15 20:15:31 pause-20210915200708-22848 kubelet[2790]: I0915 20:15:31.172339    2790 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="3bc29bae3d4616843b1d2c94298454ed2710971aeebb15d59b0a3d5fdb235edc"
	Sep 15 20:15:32 pause-20210915200708-22848 kubelet[2790]: E0915 20:15:32.729601    2790 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods/burstable/pode08be8b7-e007-4912-9cb8-cb40ccb84b65\": RecentStats: unable to find data in memory cache]"
	Sep 15 20:15:40 pause-20210915200708-22848 kubelet[2790]: I0915 20:15:40.252853    2790 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="cb459445a096a95702fa5df13ffb2d759572c9caedca89e7c359d1d46fe5a548"
	Sep 15 20:15:40 pause-20210915200708-22848 kubelet[2790]: I0915 20:15:40.267359    2790 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-78fcd69978-h8mkf through plugin: invalid network status for"
	Sep 15 20:15:43 pause-20210915200708-22848 kubelet[2790]: I0915 20:15:43.247778    2790 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-78fcd69978-h8mkf through plugin: invalid network status for"
	Sep 15 20:15:45 pause-20210915200708-22848 kubelet[2790]: I0915 20:15:45.058408    2790 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-78fcd69978-h8mkf through plugin: invalid network status for"
	Sep 15 20:15:52 pause-20210915200708-22848 kubelet[2790]: I0915 20:15:52.319474    2790 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-78fcd69978-h8mkf through plugin: invalid network status for"
	Sep 15 20:17:32 pause-20210915200708-22848 kubelet[2790]: I0915 20:17:32.129801    2790 topology_manager.go:200] "Topology Admit Handler"
	Sep 15 20:17:32 pause-20210915200708-22848 kubelet[2790]: I0915 20:17:32.562262    2790 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccf6s\" (UniqueName: \"kubernetes.io/projected/4ddb3569-9dc0-4624-b844-26b3a7726c3d-kube-api-access-ccf6s\") pod \"storage-provisioner\" (UID: \"4ddb3569-9dc0-4624-b844-26b3a7726c3d\") "
	Sep 15 20:17:32 pause-20210915200708-22848 kubelet[2790]: I0915 20:17:32.562338    2790 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/4ddb3569-9dc0-4624-b844-26b3a7726c3d-tmp\") pod \"storage-provisioner\" (UID: \"4ddb3569-9dc0-4624-b844-26b3a7726c3d\") "
	Sep 15 20:17:39 pause-20210915200708-22848 kubelet[2790]: I0915 20:17:39.351049    2790 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="db34b7dfcd048f09f5586b398a532e1b930c968e971fe216953136d72942b9d9"
	Sep 15 20:17:39 pause-20210915200708-22848 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Sep 15 20:17:39 pause-20210915200708-22848 kubelet[2790]: I0915 20:17:39.862419    2790 dynamic_cafile_content.go:170] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Sep 15 20:17:40 pause-20210915200708-22848 systemd[1]: kubelet.service: Succeeded.
	Sep 15 20:17:40 pause-20210915200708-22848 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> storage-provisioner [77c86029f941] <==
	* 
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Executing "docker container inspect pause-20210915200708-22848 --format={{.State.Status}}" took an unusually long time: 2.1535922s
	* Restarting the docker service may improve performance.
	E0915 20:18:16.114901   41592 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: "\n** stderr ** \nUnable to connect to the server: net/http: TLS handshake timeout\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:250: failed logs error: exit status 110
--- FAIL: TestPause/serial/VerifyStatus (37.35s)

                                                
                                    
TestPause/serial/DeletePaused (218.06s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe delete -p pause-20210915200708-22848 --alsologtostderr -v=5
E0915 20:18:58.759050   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915185528-22848\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestPause/serial/DeletePaused
pause_test.go:130: (dbg) Non-zero exit: out/minikube-windows-amd64.exe delete -p pause-20210915200708-22848 --alsologtostderr -v=5: exit status 1 (3m24.9004513s)

                                                
                                                
-- stdout --
	* Deleting "pause-20210915200708-22848" in docker ...
	* Deleting container "pause-20210915200708-22848" ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0915 20:18:43.684537   87136 out.go:298] Setting OutFile to fd 2592 ...
	I0915 20:18:43.687464   87136 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0915 20:18:43.687464   87136 out.go:311] Setting ErrFile to fd 2516...
	I0915 20:18:43.687464   87136 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0915 20:18:43.713465   87136 cli_runner.go:115] Run: docker ps -a --filter label=name.minikube.sigs.k8s.io --format {{.Names}}
	I0915 20:18:46.016694   87136 cli_runner.go:168] Completed: docker ps -a --filter label=name.minikube.sigs.k8s.io --format {{.Names}}: (2.3032431s)
	I0915 20:18:46.018695   87136 config.go:177] Loaded profile config "force-systemd-flag-20210915201833-22848": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.1
	I0915 20:18:46.018695   87136 config.go:177] Loaded profile config "pause-20210915200708-22848": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.1
	I0915 20:18:46.019698   87136 config.go:177] Loaded profile config "running-upgrade-20210915200708-22848": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0915 20:18:46.019698   87136 config.go:177] Loaded profile config "stopped-upgrade-20210915200708-22848": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0915 20:18:46.020694   87136 config.go:177] Loaded profile config "pause-20210915200708-22848": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.1
	I0915 20:18:46.020694   87136 delete.go:229] DeleteProfiles
	I0915 20:18:46.020694   87136 delete.go:257] Deleting pause-20210915200708-22848
	I0915 20:18:46.020694   87136 delete.go:262] pause-20210915200708-22848 configuration: &{Name:pause-20210915200708-22848 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.1 ClusterName:pause-20210915200708-22848 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.22.1 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0915 20:18:46.024687   87136 out.go:177] * Deleting "pause-20210915200708-22848" in docker ...
	I0915 20:18:46.035698   87136 delete.go:48] deleting possible leftovers for pause-20210915200708-22848 (driver=docker) ...
	I0915 20:18:46.045770   87136 cli_runner.go:115] Run: docker ps -a --filter label=name.minikube.sigs.k8s.io=pause-20210915200708-22848 --format {{.Names}}
	I0915 20:18:46.882205   87136 out.go:177] * Deleting container "pause-20210915200708-22848" ...
	I0915 20:18:46.915994   87136 cli_runner.go:115] Run: docker container inspect pause-20210915200708-22848 --format={{.State.Status}}
	I0915 20:18:47.703776   87136 cli_runner.go:115] Run: docker exec --privileged -t pause-20210915200708-22848 /bin/bash -c "sudo init 0"
	I0915 20:18:49.751930   87136 cli_runner.go:168] Completed: docker exec --privileged -t pause-20210915200708-22848 /bin/bash -c "sudo init 0": (2.0478674s)
	I0915 20:18:50.774400   87136 cli_runner.go:115] Run: docker container inspect pause-20210915200708-22848 --format={{.State.Status}}
	I0915 20:18:51.504231   87136 oci.go:649] temporary error: container pause-20210915200708-22848 status is Running but expect it to be exited
	I0915 20:18:51.504687   87136 oci.go:655] Successfully shutdown container pause-20210915200708-22848
	I0915 20:18:51.522711   87136 cli_runner.go:115] Run: docker rm -f -v pause-20210915200708-22848

                                                
                                                
** /stderr **
pause_test.go:132: failed to delete minikube with args: "out/minikube-windows-amd64.exe delete -p pause-20210915200708-22848 --alsologtostderr -v=5" : exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestPause/serial/DeletePaused]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect pause-20210915200708-22848
helpers_test.go:236: (dbg) docker inspect pause-20210915200708-22848:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6554305ea083021e47ef95d9404a882456ffba7198c927c4a672a0b6a8ca0f91",
	        "Created": "2021-09-15T20:07:32.4011154Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 124014,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-09-15T20:07:36.7750208Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:83b5a81388468b1ffcd3874b4f24c1406c63c33ac07797cc8bed6ad0207d36a8",
	        "ResolvConfPath": "/var/lib/docker/containers/6554305ea083021e47ef95d9404a882456ffba7198c927c4a672a0b6a8ca0f91/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6554305ea083021e47ef95d9404a882456ffba7198c927c4a672a0b6a8ca0f91/hostname",
	        "HostsPath": "/var/lib/docker/containers/6554305ea083021e47ef95d9404a882456ffba7198c927c4a672a0b6a8ca0f91/hosts",
	        "LogPath": "/var/lib/docker/containers/6554305ea083021e47ef95d9404a882456ffba7198c927c4a672a0b6a8ca0f91/6554305ea083021e47ef95d9404a882456ffba7198c927c4a672a0b6a8ca0f91-json.log",
	        "Name": "/pause-20210915200708-22848",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "pause-20210915200708-22848:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-20210915200708-22848",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2147483648,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/8203f02270185999fc0731de996a5e2c9ac6a6c41edc6f940053bc9ec9f67a68-init/diff:/var/lib/docker/overlay2/a259804ff45c264548e9459111f8eb7e789339b3253b50b62afde896e9e19e34/diff:/var/lib/docker/overlay2/61882a81480713e64bf02bef67583a0609b2be0589d08187547a88789584af86/diff:/var/lib/docker/overlay2/a41d1f5e24156c1d438fe25c567f3c3492d15cb77b1bf5545be9086be845138a/diff:/var/lib/docker/overlay2/86e30e10438032d0a02b54850ad0316347488f3d5b831234af1e91f943269850/diff:/var/lib/docker/overlay2/f6962936c0c1b0636454847e8e963a472786602e15a00d5e020827c2372acfce/diff:/var/lib/docker/overlay2/5eee83c6029359aefecbba85cc6d456e3a5a97c3ef6e9f4850e8a53c62b30ef5/diff:/var/lib/docker/overlay2/fdaa4e134ab960962e0a388adaa3a6aa59dd139cc016dfd4cdf4565bc80e8469/diff:/var/lib/docker/overlay2/9e1b9be7e17136fa81b0a224e2fab9704d3234ca119d87c14f9a676bbdb023f5/diff:/var/lib/docker/overlay2/ffe06185e93cb7ae8d48d84ea9be8817f2ae3d2aae85114ce41477579e23debd/diff:/var/lib/docker/overlay2/22171320a621ffe79c2acb0c13308b1b0cd3bc94a4083992e7b8589b820c625c/diff:/var/lib/docker/overlay2/eb2fb3ccafd6cb1c26a9642601357b3e0563e9e9361a5ab359bf1af592a0d709/diff:/var/lib/docker/overlay2/6081368e802a14f6f6a7424eb7af3f5f29f85bf59ed0a0709ce25b53738095cb/diff:/var/lib/docker/overlay2/fd7176e5912a824a0543fa3ab5170921538a287401ff8a451c90e1ef0fd8adea/diff:/var/lib/docker/overlay2/eec5078968f5e7332ff82191a780be0efef38aef75ea7cd67723ab3d2760c281/diff:/var/lib/docker/overlay2/d18d41a44c04cb695c4b69ac0db0d5807cee4ca8a5a695629f97e2d8d9cf9461/diff:/var/lib/docker/overlay2/b125406c01cea6a83fa5515a19bb6822d1194fcd47eeb1ed541b9304804a54be/diff:/var/lib/docker/overlay2/b49ae7a2c3101c5b094f611e08fb7b68d8688cb3c333066f697aafc1dc7c2c7e/diff:/var/lib/docker/overlay2/ce599106d279966257baab0cc43ed0366d690702b449073e812a47ae6698dedf/diff:/var/lib/docker/overlay2/5f005c2e8ab4cd52b59f5118e6f5e352dd834afde547ba1ee7b71141319e3547/diff:/var/lib/docker/overlay2/2b1f9abca5d32e21fe1da66b2604d858599b74fc9359bd55e050cebccaba5c7d/diff:/var/lib/docker/overlay2/a5f956d0de2a0313dfbaefb921518d8a75267b71a9e7c68207a81682db5394b5/diff:/var/lib/docker/overlay2/e0050af32b9eb0f12404cf384139cd48050d4a969d090faaa07b9f42fe954627/diff:/var/lib/docker/overlay2/f18c15fd90b361f7a13265b5426d985a47e261abde790665028916551b5218f3/diff:/var/lib/docker/overlay2/0f266ad6b65c857206fd10e121b74564370ca213f5706493619b6a590c496660/diff:/var/lib/docker/overlay2/fc044060d3681022984120753b0c02afc05afbb256dbdfc9f7f5e966e1d98820/diff:/var/lib/docker/overlay2/91df5011d1388013be2af7bb3097195366fd38d1f46d472e630aab583779f7c0/diff:/var/lib/docker/overlay2/f810a7fbc880b9ff7c367b14e34088e851fa045d860ce4bf4c49999fcf814a6e/diff:/var/lib/docker/overlay2/318584cae4acc059b81627e00ae703167673c73d234d6e64e894fc3500750f90/diff:/var/lib/docker/overlay2/a2e1d86ffb5aec517fe891619294d506621a002f4c53e8d3103d5d4ce777ebaf/diff:/var/lib/docker/overlay2/12fd1d215a6881aa03a06f2b8a5415b483530db121b120b66940e1e5cd2e1b96/diff:/var/lib/docker/overlay2/28bbbfc0404aecb7d7d79b4c2bfec07cd44260c922a982af523bda70bbd7be20/diff:/var/lib/docker/overlay2/4dc0077174d58a8904abddfc67a48e6dd082a1eebc72518af19da37b4eff7b2c/diff:/var/lib/docker/overlay2/4d39db844b44258dbb67b16662175b453df7bfd43274abbf1968486539955750/diff:/var/lib/docker/overlay2/ca34d73c6c31358a3eb714a014a5961863e05dee505a1cfca2c8829380ce362b/diff:/var/lib/docker/overlay2/0c0595112799a0b3604c58158946fb3d0657c4198a6a72e12fbe29a74174d3ea/diff:/var/lib/docker/overlay2/5fc43276da56e90293816918613014e7cec7bedc292a062d39d034c95d56351d/diff:/var/lib/docker/overlay2/71a282cb60752128ee370ced1695c67c421341d364956818e5852fd6714a0e64/diff:/var/lib/docker/overlay2/07723c7054e35caae4987fa66d3d1fd44de0d2875612274dde2bf04e8349b0a0/diff:/var/lib/docker/overlay2/0433db88749fb49b0f02cc65b7113c97134270991a8a82bbe7ff4432aae7e502/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8203f02270185999fc0731de996a5e2c9ac6a6c41edc6f940053bc9ec9f67a68/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8203f02270185999fc0731de996a5e2c9ac6a6c41edc6f940053bc9ec9f67a68/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8203f02270185999fc0731de996a5e2c9ac6a6c41edc6f940053bc9ec9f67a68/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-20210915200708-22848",
	                "Source": "/var/lib/docker/volumes/pause-20210915200708-22848/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-20210915200708-22848",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-20210915200708-22848",
	                "name.minikube.sigs.k8s.io": "pause-20210915200708-22848",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5b303abbfc950a029f267f11d8fae0399114d5910193701a97b665a4d89b4f95",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57016"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57017"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57018"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57019"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57020"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/5b303abbfc95",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-20210915200708-22848": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6554305ea083",
	                        "pause-20210915200708-22848"
	                    ],
	                    "NetworkID": "2dbb42bd7b08376522eb0245187f89c840890e2bd5477636f0e988415ead885b",
	                    "EndpointID": "0624522ede3ba62b8b47fa2fdff97a9a6b6137753f98b2fb64c8b3edf410be73",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-20210915200708-22848 -n pause-20210915200708-22848
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-20210915200708-22848 -n pause-20210915200708-22848: exit status 3 (5.9073497s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	! Executing "docker container inspect pause-20210915200708-22848 --format={{.State.Status}}" took an unusually long time: 2.1866752s
	* Restarting the docker service may improve performance.
	E0915 20:22:14.721670   17608 status.go:374] failed to get storage capacity of /var: NewSession: new client: new client: ssh: handshake failed: EOF
	E0915 20:22:14.721670   17608 status.go:247] status error: NewSession: new client: new client: ssh: handshake failed: EOF

                                                
                                                
** /stderr **
helpers_test.go:240: status error: exit status 3 (may be ok)
helpers_test.go:242: "pause-20210915200708-22848" host is not running, skipping log retrieval (state="Error")
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestPause/serial/DeletePaused]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect pause-20210915200708-22848
helpers_test.go:236: (dbg) docker inspect pause-20210915200708-22848:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6554305ea083021e47ef95d9404a882456ffba7198c927c4a672a0b6a8ca0f91",
	        "Created": "2021-09-15T20:07:32.4011154Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 124014,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-09-15T20:07:36.7750208Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:83b5a81388468b1ffcd3874b4f24c1406c63c33ac07797cc8bed6ad0207d36a8",
	        "ResolvConfPath": "/var/lib/docker/containers/6554305ea083021e47ef95d9404a882456ffba7198c927c4a672a0b6a8ca0f91/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6554305ea083021e47ef95d9404a882456ffba7198c927c4a672a0b6a8ca0f91/hostname",
	        "HostsPath": "/var/lib/docker/containers/6554305ea083021e47ef95d9404a882456ffba7198c927c4a672a0b6a8ca0f91/hosts",
	        "LogPath": "/var/lib/docker/containers/6554305ea083021e47ef95d9404a882456ffba7198c927c4a672a0b6a8ca0f91/6554305ea083021e47ef95d9404a882456ffba7198c927c4a672a0b6a8ca0f91-json.log",
	        "Name": "/pause-20210915200708-22848",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "pause-20210915200708-22848:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-20210915200708-22848",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2147483648,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/8203f02270185999fc0731de996a5e2c9ac6a6c41edc6f940053bc9ec9f67a68-init/diff:/var/lib/docker/overlay2/a259804ff45c264548e9459111f8eb7e789339b3253b50b62afde896e9e19e34/diff:/var/lib/docker/overlay2/61882a81480713e64bf02bef67583a0609b2be0589d08187547a88789584af86/diff:/var/lib/docker/overlay2/a41d1f5e24156c1d438fe25c567f3c3492d15cb77b1bf5545be9086be845138a/diff:/var/lib/docker/overlay2/86e30e10438032d0a02b54850ad0316347488f3d5b831234af1e91f943269850/diff:/var/lib/docker/overlay2/f6962936c0c1b0636454847e8e963a472786602e15a00d5e020827c2372acfce/diff:/var/lib/docker/overlay2/5eee83c6029359aefecbba85cc6d456e3a5a97c3ef6e9f4850e8a53c62b30ef5/diff:/var/lib/docker/overlay2/fdaa4e134ab960962e0a388adaa3a6aa59dd139cc016dfd4cdf4565bc80e8469/diff:/var/lib/docker/overlay2/9e1b9be7e17136fa81b0a224e2fab9704d3234ca119d87c14f9a676bbdb023f5/diff:/var/lib/docker/overlay2/ffe06185e93cb7ae8d48d84ea9be8817f2ae3d2aae85114ce41477579e23debd/diff:/var/lib/docker/overlay2/22171320a621ffe79c2acb0c13308b1b0cd3bc94a4083992e7b8589b820c625c/diff:/var/lib/docker/overlay2/eb2fb3ccafd6cb1c26a9642601357b3e0563e9e9361a5ab359bf1af592a0d709/diff:/var/lib/docker/overlay2/6081368e802a14f6f6a7424eb7af3f5f29f85bf59ed0a0709ce25b53738095cb/diff:/var/lib/docker/overlay2/fd7176e5912a824a0543fa3ab5170921538a287401ff8a451c90e1ef0fd8adea/diff:/var/lib/docker/overlay2/eec5078968f5e7332ff82191a780be0efef38aef75ea7cd67723ab3d2760c281/diff:/var/lib/docker/overlay2/d18d41a44c04cb695c4b69ac0db0d5807cee4ca8a5a695629f97e2d8d9cf9461/diff:/var/lib/docker/overlay2/b125406c01cea6a83fa5515a19bb6822d1194fcd47eeb1ed541b9304804a54be/diff:/var/lib/docker/overlay2/b49ae7a2c3101c5b094f611e08fb7b68d8688cb3c333066f697aafc1dc7c2c7e/diff:/var/lib/docker/overlay2/ce599106d279966257baab0cc43ed0366d690702b449073e812a47ae6698dedf/diff:/var/lib/docker/overlay2/5f005c2e8ab4cd52b59f5118e6f5e352dd834afde547ba1ee7b71141319e3547/diff:/var/lib/docker/overlay2/2b1f9abca5d32e21fe1da66b2604d858599b74fc9359bd55e050cebccaba5c7d/diff:/var/lib/docker/overlay2/a5f956d0de2a0313dfbaefb921518d8a75267b71a9e7c68207a81682db5394b5/diff:/var/lib/docker/overlay2/e0050af32b9eb0f12404cf384139cd48050d4a969d090faaa07b9f42fe954627/diff:/var/lib/docker/overlay2/f18c15fd90b361f7a13265b5426d985a47e261abde790665028916551b5218f3/diff:/var/lib/docker/overlay2/0f266ad6b65c857206fd10e121b74564370ca213f5706493619b6a590c496660/diff:/var/lib/docker/overlay2/fc044060d3681022984120753b0c02afc05afbb256dbdfc9f7f5e966e1d98820/diff:/var/lib/docker/overlay2/91df5011d1388013be2af7bb3097195366fd38d1f46d472e630aab583779f7c0/diff:/var/lib/docker/overlay2/f810a7fbc880b9ff7c367b14e34088e851fa045d860ce4bf4c49999fcf814a6e/diff:/var/lib/docker/overlay2/318584cae4acc059b81627e00ae703167673c73d234d6e64e894fc3500750f90/diff:/var/lib/docker/overlay2/a2e1d86ffb5aec517fe891619294d506621a002f4c53e8d3103d5d4ce777ebaf/diff:/var/lib/docker/overlay2/12fd1d215a6881aa03a06f2b8a5415b483530db121b120b66940e1e5cd2e1b96/diff:/var/lib/docker/overlay2/28bbbfc0404aecb7d7d79b4c2bfec07cd44260c922a982af523bda70bbd7be20/diff:/var/lib/docker/overlay2/4dc0077174d58a8904abddfc67a48e6dd082a1eebc72518af19da37b4eff7b2c/diff:/var/lib/docker/overlay2/4d39db844b44258dbb67b16662175b453df7bfd43274abbf1968486539955750/diff:/var/lib/docker/overlay2/ca34d73c6c31358a3eb714a014a5961863e05dee505a1cfca2c8829380ce362b/diff:/var/lib/docker/overlay2/0c0595112799a0b3604c58158946fb3d0657c4198a6a72e12fbe29a74174d3ea/diff:/var/lib/docker/overlay2/5fc43276da56e90293816918613014e7cec7bedc292a062d39d034c95d56351d/diff:/var/lib/docker/overlay2/71a282cb60752128ee370ced1695c67c421341d364956818e5852fd6714a0e64/diff:/var/lib/docker/overlay2/07723c7054e35caae4987fa66d3d1fd44de0d2875612274dde2bf04e8349b0a0/diff:/var/lib/docker/overlay2/0433db88749fb49b0f02cc65b7113c97134270991a8a82bbe7ff4432aae7e502/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8203f02270185999fc0731de996a5e2c9ac6a6c41edc6f940053bc9ec9f67a68/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8203f02270185999fc0731de996a5e2c9ac6a6c41edc6f940053bc9ec9f67a68/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8203f02270185999fc0731de996a5e2c9ac6a6c41edc6f940053bc9ec9f67a68/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-20210915200708-22848",
	                "Source": "/var/lib/docker/volumes/pause-20210915200708-22848/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-20210915200708-22848",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-20210915200708-22848",
	                "name.minikube.sigs.k8s.io": "pause-20210915200708-22848",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5b303abbfc950a029f267f11d8fae0399114d5910193701a97b665a4d89b4f95",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57016"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57017"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57018"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57019"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57020"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/5b303abbfc95",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-20210915200708-22848": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6554305ea083",
	                        "pause-20210915200708-22848"
	                    ],
	                    "NetworkID": "2dbb42bd7b08376522eb0245187f89c840890e2bd5477636f0e988415ead885b",
	                    "EndpointID": "0624522ede3ba62b8b47fa2fdff97a9a6b6137753f98b2fb64c8b3edf410be73",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-20210915200708-22848 -n pause-20210915200708-22848
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-20210915200708-22848 -n pause-20210915200708-22848: exit status 3 (5.6938876s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	! Executing "docker container inspect pause-20210915200708-22848 --format={{.State.Status}}" took an unusually long time: 2.085803s
	* Restarting the docker service may improve performance.
	E0915 20:22:21.196850   88688 status.go:374] failed to get storage capacity of /var: NewSession: new client: new client: ssh: handshake failed: EOF
	E0915 20:22:21.197007   88688 status.go:247] status error: NewSession: new client: new client: ssh: handshake failed: EOF

                                                
                                                
** /stderr **
helpers_test.go:240: status error: exit status 3 (may be ok)
helpers_test.go:242: "pause-20210915200708-22848" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestPause/serial/DeletePaused (218.06s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (882.53s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-20210915203352-22848 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.14.0
E0915 20:50:46.745201   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210915200110-22848\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p old-k8s-version-20210915203352-22848 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.14.0: exit status 1 (13m33.8396494s)

                                                
                                                
-- stdout --
	* [old-k8s-version-20210915203352-22848] minikube v1.23.0 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	  - KUBECONFIG=C:\Users\jenkins\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12425
	* Kubernetes 1.22.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.22.1
	* Using the docker driver based on existing profile
	* Starting control plane node old-k8s-version-20210915203352-22848 in cluster old-k8s-version-20210915203352-22848
	* Pulling base image ...
	* Restarting existing docker container for "old-k8s-version-20210915203352-22848" ...
	* Preparing Kubernetes v1.14.0 on Docker 20.10.8 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Verifying Kubernetes components...
	  - Using image kubernetesui/dashboard:v2.1.0
	  - Using image k8s.gcr.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	* Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard

                                                
                                                
-- /stdout --
** stderr ** 
	I0915 20:50:19.019001   59380 out.go:298] Setting OutFile to fd 2384 ...
	I0915 20:50:19.021105   59380 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0915 20:50:19.021105   59380 out.go:311] Setting ErrFile to fd 2480...
	I0915 20:50:19.021105   59380 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0915 20:50:19.048304   59380 out.go:305] Setting JSON to false
	I0915 20:50:19.056490   59380 start.go:111] hostinfo: {"hostname":"windows-server-1","uptime":9158492,"bootTime":1622580527,"procs":160,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"9879231f-6171-435d-bab4-5b366cc6391b"}
	W0915 20:50:19.056490   59380 start.go:119] gopshost.Virtualization returned error: not implemented yet
	I0915 20:50:19.069838   59380 out.go:177] * [old-k8s-version-20210915203352-22848] minikube v1.23.0 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	I0915 20:50:19.070368   59380 notify.go:169] Checking for updates...
	I0915 20:50:19.073187   59380 out.go:177]   - KUBECONFIG=C:\Users\jenkins\minikube-integration\kubeconfig
	I0915 20:50:19.076201   59380 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins\minikube-integration\.minikube
	I0915 20:50:19.079528   59380 out.go:177]   - MINIKUBE_LOCATION=12425
	I0915 20:50:19.086352   59380 config.go:177] Loaded profile config "old-k8s-version-20210915203352-22848": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.14.0
	I0915 20:50:19.092398   59380 out.go:177] * Kubernetes 1.22.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.22.1
	I0915 20:50:19.093180   59380 driver.go:343] Setting default libvirt URI to qemu:///system
	I0915 20:50:21.137642   59380 docker.go:132] docker version: linux-20.10.5
	I0915 20:50:21.152466   59380 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0915 20:50:22.424565   59380 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.2718958s)
	I0915 20:50:22.425415   59380 info.go:263] docker info: {ID:AZM6:4F7P:D7J3:PGKE:EIYN:3OQU:SEA3:BB2T:P6VC:GKKH:UKSA:R2VX Containers:5 ContainersRunning:4 ContainersPaused:0 ContainersStopped:1 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:78 OomKillDisable:true NGoroutines:74 SystemTime:2021-09-15 20:50:21.8427854 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0915 20:50:22.429874   59380 out.go:177] * Using the docker driver based on existing profile
	I0915 20:50:22.430793   59380 start.go:278] selected driver: docker
	I0915 20:50:22.430793   59380 start.go:751] validating driver "docker" against &{Name:old-k8s-version-20210915203352-22848 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20210915203352-22848 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0915 20:50:22.431061   59380 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0915 20:50:22.571854   59380 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0915 20:50:23.838633   59380 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.2665219s)
	I0915 20:50:23.839558   59380 info.go:263] docker info: {ID:AZM6:4F7P:D7J3:PGKE:EIYN:3OQU:SEA3:BB2T:P6VC:GKKH:UKSA:R2VX Containers:5 ContainersRunning:4 ContainersPaused:0 ContainersStopped:1 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:78 OomKillDisable:true NGoroutines:74 SystemTime:2021-09-15 20:50:23.3020414 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0915 20:50:23.840176   59380 start_flags.go:737] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 20:50:23.840176   59380 cni.go:93] Creating CNI manager for ""
	I0915 20:50:23.840176   59380 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0915 20:50:23.840383   59380 start_flags.go:278] config:
	{Name:old-k8s-version-20210915203352-22848 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20210915203352-22848 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0915 20:50:23.842491   59380 out.go:177] * Starting control plane node old-k8s-version-20210915203352-22848 in cluster old-k8s-version-20210915203352-22848
	I0915 20:50:23.842491   59380 cache.go:118] Beginning downloading kic base image for docker with docker
	I0915 20:50:23.845491   59380 out.go:177] * Pulling base image ...
	I0915 20:50:23.846007   59380 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime docker
	I0915 20:50:23.846007   59380 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 in local docker daemon
	I0915 20:50:23.846401   59380 preload.go:147] Found local preload: C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v12-v1.14.0-docker-overlay2-amd64.tar.lz4
	I0915 20:50:23.846401   59380 cache.go:57] Caching tarball of preloaded images
	I0915 20:50:23.846920   59380 preload.go:173] Found C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v12-v1.14.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0915 20:50:23.847686   59380 cache.go:60] Finished verifying existence of preloaded tar for  v1.14.0 on docker
	I0915 20:50:23.848133   59380 profile.go:148] Saving config to C:\Users\jenkins\minikube-integration\.minikube\profiles\old-k8s-version-20210915203352-22848\config.json ...
	I0915 20:50:24.764703   59380 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 in local docker daemon, skipping pull
	I0915 20:50:24.765104   59380 cache.go:140] gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 exists in daemon, skipping load
	I0915 20:50:24.765104   59380 cache.go:206] Successfully downloaded all kic artifacts
	I0915 20:50:24.765690   59380 start.go:313] acquiring machines lock for old-k8s-version-20210915203352-22848: {Name:mkdf40fe6814eb846215b4f404333c923849772e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 20:50:24.765831   59380 start.go:317] acquired machines lock for "old-k8s-version-20210915203352-22848" in 0s
	I0915 20:50:24.765831   59380 start.go:93] Skipping create...Using existing machine configuration
	I0915 20:50:24.765831   59380 fix.go:55] fixHost starting: 
	I0915 20:50:24.832135   59380 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210915203352-22848 --format={{.State.Status}}
	I0915 20:50:25.799129   59380 fix.go:108] recreateIfNeeded on old-k8s-version-20210915203352-22848: state=Stopped err=<nil>
	W0915 20:50:25.799406   59380 fix.go:134] unexpected machine state, will restart: <nil>
	I0915 20:50:25.803457   59380 out.go:177] * Restarting existing docker container for "old-k8s-version-20210915203352-22848" ...
	I0915 20:50:25.816664   59380 cli_runner.go:115] Run: docker start old-k8s-version-20210915203352-22848
	I0915 20:50:29.686528   59380 cli_runner.go:168] Completed: docker start old-k8s-version-20210915203352-22848: (3.8698873s)
	I0915 20:50:29.710614   59380 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210915203352-22848 --format={{.State.Status}}
	I0915 20:50:30.520615   59380 kic.go:420] container "old-k8s-version-20210915203352-22848" state is running.
	I0915 20:50:30.549428   59380 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20210915203352-22848
	I0915 20:50:31.393630   59380 profile.go:148] Saving config to C:\Users\jenkins\minikube-integration\.minikube\profiles\old-k8s-version-20210915203352-22848\config.json ...
	I0915 20:50:31.397377   59380 machine.go:88] provisioning docker machine ...
	I0915 20:50:31.397377   59380 ubuntu.go:169] provisioning hostname "old-k8s-version-20210915203352-22848"
	I0915 20:50:31.421410   59380 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210915203352-22848
	I0915 20:50:32.209847   59380 main.go:130] libmachine: Using SSH client type: native
	I0915 20:50:32.211235   59380 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x11facc0] 0x11fdb80 <nil>  [] 0s} 127.0.0.1 57811 <nil> <nil>}
	I0915 20:50:32.211335   59380 main.go:130] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20210915203352-22848 && echo "old-k8s-version-20210915203352-22848" | sudo tee /etc/hostname
	I0915 20:50:32.238718   59380 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0915 20:50:36.054650   59380 main.go:130] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20210915203352-22848
	
	I0915 20:50:36.077646   59380 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210915203352-22848
	I0915 20:50:36.844437   59380 main.go:130] libmachine: Using SSH client type: native
	I0915 20:50:36.845038   59380 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x11facc0] 0x11fdb80 <nil>  [] 0s} 127.0.0.1 57811 <nil> <nil>}
	I0915 20:50:36.845038   59380 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20210915203352-22848' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20210915203352-22848/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20210915203352-22848' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0915 20:50:37.367940   59380 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0915 20:50:37.367940   59380 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins\minikube-integration\.minikube CaCertPath:C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins\minikube-integration\.minikube}
	I0915 20:50:37.367940   59380 ubuntu.go:177] setting up certificates
	I0915 20:50:37.367940   59380 provision.go:83] configureAuth start
	I0915 20:50:37.389047   59380 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20210915203352-22848
	I0915 20:50:38.155224   59380 provision.go:138] copyHostCerts
	I0915 20:50:38.156223   59380 exec_runner.go:145] found C:\Users\jenkins\minikube-integration\.minikube/ca.pem, removing ...
	I0915 20:50:38.156223   59380 exec_runner.go:208] rm: C:\Users\jenkins\minikube-integration\.minikube\ca.pem
	I0915 20:50:38.156223   59380 exec_runner.go:152] cp: C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0915 20:50:38.160579   59380 exec_runner.go:145] found C:\Users\jenkins\minikube-integration\.minikube/cert.pem, removing ...
	I0915 20:50:38.160579   59380 exec_runner.go:208] rm: C:\Users\jenkins\minikube-integration\.minikube\cert.pem
	I0915 20:50:38.161554   59380 exec_runner.go:152] cp: C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0915 20:50:38.164613   59380 exec_runner.go:145] found C:\Users\jenkins\minikube-integration\.minikube/key.pem, removing ...
	I0915 20:50:38.164846   59380 exec_runner.go:208] rm: C:\Users\jenkins\minikube-integration\.minikube\key.pem
	I0915 20:50:38.165253   59380 exec_runner.go:152] cp: C:\Users\jenkins\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins\minikube-integration\.minikube/key.pem (1675 bytes)
	I0915 20:50:38.167645   59380 provision.go:112] generating server cert: C:\Users\jenkins\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.old-k8s-version-20210915203352-22848 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20210915203352-22848]
	I0915 20:50:38.359335   59380 provision.go:172] copyRemoteCerts
	I0915 20:50:38.375372   59380 ssh_runner.go:152] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0915 20:50:38.384388   59380 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210915203352-22848
	I0915 20:50:39.212451   59380 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57811 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\old-k8s-version-20210915203352-22848\id_rsa Username:docker}
	I0915 20:50:39.618675   59380 ssh_runner.go:192] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.2433104s)
	I0915 20:50:39.619381   59380 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0915 20:50:39.919989   59380 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1281 bytes)
	I0915 20:50:40.140173   59380 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0915 20:50:40.473316   59380 provision.go:86] duration metric: configureAuth took 3.1053946s
	I0915 20:50:40.473316   59380 ubuntu.go:193] setting minikube options for container-runtime
	I0915 20:50:40.473316   59380 config.go:177] Loaded profile config "old-k8s-version-20210915203352-22848": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.14.0
	I0915 20:50:40.487030   59380 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210915203352-22848
	I0915 20:50:41.278371   59380 main.go:130] libmachine: Using SSH client type: native
	I0915 20:50:41.280603   59380 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x11facc0] 0x11fdb80 <nil>  [] 0s} 127.0.0.1 57811 <nil> <nil>}
	I0915 20:50:41.281140   59380 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0915 20:50:41.870131   59380 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0915 20:50:41.870460   59380 ubuntu.go:71] root file system type: overlay
	I0915 20:50:41.870748   59380 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0915 20:50:41.886233   59380 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210915203352-22848
	I0915 20:50:42.691241   59380 main.go:130] libmachine: Using SSH client type: native
	I0915 20:50:42.692081   59380 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x11facc0] 0x11fdb80 <nil>  [] 0s} 127.0.0.1 57811 <nil> <nil>}
	I0915 20:50:42.692081   59380 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0915 20:50:43.491557   59380 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0915 20:50:43.506402   59380 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210915203352-22848
	I0915 20:50:44.338638   59380 main.go:130] libmachine: Using SSH client type: native
	I0915 20:50:44.339359   59380 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x11facc0] 0x11fdb80 <nil>  [] 0s} 127.0.0.1 57811 <nil> <nil>}
	I0915 20:50:44.339359   59380 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0915 20:50:45.015825   59380 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0915 20:50:45.015825   59380 machine.go:91] provisioned docker machine in 13.6185294s
	I0915 20:50:45.015825   59380 start.go:267] post-start starting for "old-k8s-version-20210915203352-22848" (driver="docker")
	I0915 20:50:45.015825   59380 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0915 20:50:45.035046   59380 ssh_runner.go:152] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0915 20:50:45.056647   59380 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210915203352-22848
	I0915 20:50:45.784464   59380 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57811 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\old-k8s-version-20210915203352-22848\id_rsa Username:docker}
	I0915 20:50:46.055735   59380 ssh_runner.go:192] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.0206945s)
	I0915 20:50:46.071647   59380 ssh_runner.go:152] Run: cat /etc/os-release
	I0915 20:50:46.108489   59380 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0915 20:50:46.108700   59380 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0915 20:50:46.108700   59380 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0915 20:50:46.109179   59380 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0915 20:50:46.110141   59380 filesync.go:126] Scanning C:\Users\jenkins\minikube-integration\.minikube\addons for local assets ...
	I0915 20:50:46.111515   59380 filesync.go:126] Scanning C:\Users\jenkins\minikube-integration\.minikube\files for local assets ...
	I0915 20:50:46.114993   59380 filesync.go:149] local asset: C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\228482.pem -> 228482.pem in /etc/ssl/certs
	I0915 20:50:46.137660   59380 ssh_runner.go:152] Run: sudo mkdir -p /etc/ssl/certs
	I0915 20:50:46.247660   59380 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\228482.pem --> /etc/ssl/certs/228482.pem (1708 bytes)
	I0915 20:50:46.421672   59380 start.go:270] post-start completed in 1.4058555s
	I0915 20:50:46.429751   59380 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 20:50:46.444674   59380 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210915203352-22848
	I0915 20:50:47.284656   59380 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57811 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\old-k8s-version-20210915203352-22848\id_rsa Username:docker}
	I0915 20:50:47.578039   59380 ssh_runner.go:192] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.1481129s)
	I0915 20:50:47.578179   59380 fix.go:57] fixHost completed within 22.8124833s
	I0915 20:50:47.578179   59380 start.go:80] releasing machines lock for "old-k8s-version-20210915203352-22848", held for 22.8124833s
	I0915 20:50:47.594692   59380 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20210915203352-22848
	I0915 20:50:48.350276   59380 ssh_runner.go:152] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0915 20:50:48.365172   59380 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210915203352-22848
	I0915 20:50:48.365576   59380 ssh_runner.go:152] Run: systemctl --version
	I0915 20:50:48.387045   59380 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210915203352-22848
	I0915 20:50:49.254145   59380 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57811 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\old-k8s-version-20210915203352-22848\id_rsa Username:docker}
	I0915 20:50:49.259108   59380 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57811 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\old-k8s-version-20210915203352-22848\id_rsa Username:docker}
	I0915 20:50:49.580900   59380 ssh_runner.go:192] Completed: systemctl --version: (1.2153314s)
	I0915 20:50:49.607404   59380 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service containerd
	I0915 20:50:49.822327   59380 ssh_runner.go:192] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.4720596s)
	I0915 20:50:49.840878   59380 ssh_runner.go:152] Run: sudo systemctl cat docker.service
	I0915 20:50:49.941985   59380 cruntime.go:255] skipping containerd shutdown because we are bound to it
	I0915 20:50:49.957886   59380 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service crio
	I0915 20:50:50.032607   59380 ssh_runner.go:152] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0915 20:50:50.154084   59380 ssh_runner.go:152] Run: sudo systemctl unmask docker.service
	I0915 20:50:50.786569   59380 ssh_runner.go:152] Run: sudo systemctl enable docker.socket
	I0915 20:50:51.420071   59380 ssh_runner.go:152] Run: sudo systemctl cat docker.service
	I0915 20:50:51.508000   59380 ssh_runner.go:152] Run: sudo systemctl daemon-reload
	I0915 20:50:52.091017   59380 ssh_runner.go:152] Run: sudo systemctl start docker
	I0915 20:50:52.231450   59380 ssh_runner.go:152] Run: docker version --format {{.Server.Version}}
	I0915 20:50:52.632092   59380 ssh_runner.go:152] Run: docker version --format {{.Server.Version}}
	I0915 20:50:52.966038   59380 out.go:204] * Preparing Kubernetes v1.14.0 on Docker 20.10.8 ...
	I0915 20:50:52.986220   59380 cli_runner.go:115] Run: docker exec -t old-k8s-version-20210915203352-22848 dig +short host.docker.internal
	I0915 20:50:54.820398   59380 cli_runner.go:168] Completed: docker exec -t old-k8s-version-20210915203352-22848 dig +short host.docker.internal: (1.8341886s)
	I0915 20:50:54.820573   59380 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0915 20:50:54.851036   59380 ssh_runner.go:152] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0915 20:50:54.899948   59380 ssh_runner.go:152] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0915 20:50:55.205287   59380 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20210915203352-22848
	I0915 20:50:56.067949   59380 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime docker
	I0915 20:50:56.090483   59380 ssh_runner.go:152] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0915 20:50:56.469176   59380 docker.go:558] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	kubernetesui/dashboard:v2.1.0
	kubernetesui/metrics-scraper:v1.0.4
	k8s.gcr.io/kube-proxy:v1.14.0
	k8s.gcr.io/kube-apiserver:v1.14.0
	k8s.gcr.io/kube-controller-manager:v1.14.0
	k8s.gcr.io/kube-scheduler:v1.14.0
	k8s.gcr.io/coredns:1.3.1
	k8s.gcr.io/etcd:3.3.10
	busybox:1.28.4-glibc
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0915 20:50:56.469176   59380 docker.go:489] Images already preloaded, skipping extraction
	I0915 20:50:56.481951   59380 ssh_runner.go:152] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0915 20:50:56.819334   59380 docker.go:558] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	kubernetesui/dashboard:v2.1.0
	kubernetesui/metrics-scraper:v1.0.4
	k8s.gcr.io/kube-proxy:v1.14.0
	k8s.gcr.io/kube-controller-manager:v1.14.0
	k8s.gcr.io/kube-scheduler:v1.14.0
	k8s.gcr.io/kube-apiserver:v1.14.0
	k8s.gcr.io/coredns:1.3.1
	k8s.gcr.io/etcd:3.3.10
	busybox:1.28.4-glibc
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0915 20:50:56.819334   59380 cache_images.go:78] Images are preloaded, skipping loading
	I0915 20:50:56.835351   59380 ssh_runner.go:152] Run: docker info --format {{.CgroupDriver}}
	I0915 20:50:57.905354   59380 ssh_runner.go:192] Completed: docker info --format {{.CgroupDriver}}: (1.069687s)
	I0915 20:50:57.906004   59380 cni.go:93] Creating CNI manager for ""
	I0915 20:50:57.906219   59380 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0915 20:50:57.906375   59380 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0915 20:50:57.906620   59380 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.14.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20210915203352-22848 NodeName:old-k8s-version-20210915203352-22848 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0915 20:50:57.906961   59380 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-20210915203352-22848"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20210915203352-22848
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.85.2:2381
	kubernetesVersion: v1.14.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0915 20:50:57.907465   59380 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.14.0/kubelet --allow-privileged=true --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --client-ca-file=/var/lib/minikube/certs/ca.crt --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-20210915203352-22848 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20210915203352-22848 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0915 20:50:57.926342   59380 ssh_runner.go:152] Run: sudo ls /var/lib/minikube/binaries/v1.14.0
	I0915 20:50:58.015064   59380 binaries.go:44] Found k8s binaries, skipping transfer
	I0915 20:50:58.028858   59380 ssh_runner.go:152] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0915 20:50:58.155715   59380 ssh_runner.go:319] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (434 bytes)
	I0915 20:50:58.275723   59380 ssh_runner.go:319] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0915 20:50:58.460220   59380 ssh_runner.go:319] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2149 bytes)
	I0915 20:50:58.665432   59380 ssh_runner.go:152] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0915 20:50:58.704019   59380 ssh_runner.go:152] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
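The `/etc/hosts` rewrite above uses a filter-then-append idiom: drop any existing line ending in a tab plus the hostname, then echo the fresh `IP<TAB>hostname` entry, so exactly one `control-plane.minikube.internal` mapping survives. A minimal sketch of that idempotent update in Python (function name and sample addresses are illustrative, not part of minikube):

```python
def set_hosts_entry(hosts_text: str, ip: str, hostname: str) -> str:
    """Drop any existing line for `hostname`, then append `ip<TAB>hostname`,
    mirroring the shell: { grep -v <entry>; echo <new entry>; } > hosts."""
    kept = [line for line in hosts_text.splitlines()
            if not line.endswith("\t" + hostname)]
    kept.append(f"{ip}\t{hostname}")
    return "\n".join(kept) + "\n"

# Example: a stale entry is replaced rather than duplicated
before = "127.0.0.1\tlocalhost\n192.168.49.2\tcontrol-plane.minikube.internal\n"
after = set_hosts_entry(before, "192.168.85.2", "control-plane.minikube.internal")
```

Running the filter twice with the same arguments is a no-op, which is why minikube can apply it on every start.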
	I0915 20:50:58.847795   59380 certs.go:52] Setting up C:\Users\jenkins\minikube-integration\.minikube\profiles\old-k8s-version-20210915203352-22848 for IP: 192.168.85.2
	I0915 20:50:58.848546   59380 certs.go:179] skipping minikubeCA CA generation: C:\Users\jenkins\minikube-integration\.minikube\ca.key
	I0915 20:50:58.848968   59380 certs.go:179] skipping proxyClientCA CA generation: C:\Users\jenkins\minikube-integration\.minikube\proxy-client-ca.key
	I0915 20:50:58.849723   59380 certs.go:293] skipping minikube-user signed cert generation: C:\Users\jenkins\minikube-integration\.minikube\profiles\old-k8s-version-20210915203352-22848\client.key
	I0915 20:50:58.850576   59380 certs.go:293] skipping minikube signed cert generation: C:\Users\jenkins\minikube-integration\.minikube\profiles\old-k8s-version-20210915203352-22848\apiserver.key.43b9df8c
	I0915 20:50:58.850848   59380 certs.go:293] skipping aggregator signed cert generation: C:\Users\jenkins\minikube-integration\.minikube\profiles\old-k8s-version-20210915203352-22848\proxy-client.key
	I0915 20:50:58.852858   59380 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\22848.pem (1338 bytes)
	W0915 20:50:58.853382   59380 certs.go:372] ignoring C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\22848_empty.pem, impossibly tiny 0 bytes
	I0915 20:50:58.853568   59380 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0915 20:50:58.854070   59380 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0915 20:50:58.854495   59380 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0915 20:50:58.855055   59380 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\certs\C:\Users\jenkins\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0915 20:50:58.865137   59380 certs.go:376] found cert: C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\228482.pem (1708 bytes)
	I0915 20:50:58.870177   59380 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\old-k8s-version-20210915203352-22848\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0915 20:50:59.017619   59380 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\old-k8s-version-20210915203352-22848\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0915 20:50:59.192677   59380 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\old-k8s-version-20210915203352-22848\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0915 20:50:59.378016   59380 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\profiles\old-k8s-version-20210915203352-22848\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0915 20:50:59.519256   59380 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0915 20:50:59.678179   59380 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0915 20:50:59.915794   59380 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0915 20:51:00.106591   59380 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0915 20:51:00.314082   59380 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\228482.pem --> /usr/share/ca-certificates/228482.pem (1708 bytes)
	I0915 20:51:00.449164   59380 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0915 20:51:00.587977   59380 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\certs\22848.pem --> /usr/share/ca-certificates/22848.pem (1338 bytes)
	I0915 20:51:00.749821   59380 ssh_runner.go:319] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0915 20:51:00.912270   59380 ssh_runner.go:152] Run: openssl version
	I0915 20:51:00.994817   59380 ssh_runner.go:152] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/228482.pem && ln -fs /usr/share/ca-certificates/228482.pem /etc/ssl/certs/228482.pem"
	I0915 20:51:01.085003   59380 ssh_runner.go:152] Run: ls -la /usr/share/ca-certificates/228482.pem
	I0915 20:51:01.138775   59380 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Sep 15 18:55 /usr/share/ca-certificates/228482.pem
	I0915 20:51:01.168014   59380 ssh_runner.go:152] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/228482.pem
	I0915 20:51:01.233245   59380 ssh_runner.go:152] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/228482.pem /etc/ssl/certs/3ec20f2e.0"
	I0915 20:51:01.305535   59380 ssh_runner.go:152] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0915 20:51:01.411409   59380 ssh_runner.go:152] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0915 20:51:01.456111   59380 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Sep 15 18:34 /usr/share/ca-certificates/minikubeCA.pem
	I0915 20:51:01.484259   59380 ssh_runner.go:152] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0915 20:51:01.536569   59380 ssh_runner.go:152] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0915 20:51:01.623857   59380 ssh_runner.go:152] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22848.pem && ln -fs /usr/share/ca-certificates/22848.pem /etc/ssl/certs/22848.pem"
	I0915 20:51:01.740760   59380 ssh_runner.go:152] Run: ls -la /usr/share/ca-certificates/22848.pem
	I0915 20:51:01.775316   59380 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Sep 15 18:55 /usr/share/ca-certificates/22848.pem
	I0915 20:51:01.794658   59380 ssh_runner.go:152] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22848.pem
	I0915 20:51:01.871995   59380 ssh_runner.go:152] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22848.pem /etc/ssl/certs/51391683.0"
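The `test -L … || ln -fs …` pairs above create the hash-named trust symlinks (e.g. `3ec20f2e.0`) only when they are not already symlinks. A rough Python equivalent of that idempotent-symlink step, using a scratch directory and placeholder names (the OpenSSL subject-hash computation itself is omitted):

```python
import os
import tempfile

def ensure_symlink(target: str, link_path: str) -> bool:
    """Create link_path -> target unless it is already a symlink,
    mirroring: test -L link || ln -fs target link. Returns True if created."""
    if os.path.islink(link_path):
        return False
    # ln -fs semantics: replace any regular file occupying the name
    if os.path.exists(link_path):
        os.remove(link_path)
    os.symlink(target, link_path)
    return True

# Example in a scratch directory (names chosen to echo the log, not real certs)
d = tempfile.mkdtemp()
cert = os.path.join(d, "228482.pem")
with open(cert, "w") as f:
    f.write("dummy cert\n")
link = os.path.join(d, "3ec20f2e.0")
created = ensure_symlink(cert, link)
```

The second invocation with the same arguments returns False, matching the shell guard's skip-if-present behavior.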
	I0915 20:51:01.960664   59380 kubeadm.go:390] StartCluster: {Name:old-k8s-version-20210915203352-22848 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20210915203352-22848 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0915 20:51:01.972520   59380 ssh_runner.go:152] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0915 20:51:02.270011   59380 ssh_runner.go:152] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0915 20:51:02.341808   59380 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0915 20:51:02.345324   59380 kubeadm.go:600] restartCluster start
	I0915 20:51:02.376944   59380 ssh_runner.go:152] Run: sudo test -d /data/minikube
	I0915 20:51:02.442171   59380 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0915 20:51:02.456637   59380 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20210915203352-22848
	I0915 20:51:03.201303   59380 kubeconfig.go:117] verify returned: extract IP: "old-k8s-version-20210915203352-22848" does not appear in C:\Users\jenkins\minikube-integration\kubeconfig
	I0915 20:51:03.202928   59380 kubeconfig.go:128] "old-k8s-version-20210915203352-22848" context is missing from C:\Users\jenkins\minikube-integration\kubeconfig - will repair!
	I0915 20:51:03.205445   59380 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\kubeconfig: {Name:mk312e0248780fd448f3a83862df8ee597f47373 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 20:51:03.264106   59380 ssh_runner.go:152] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0915 20:51:03.352176   59380 api_server.go:164] Checking apiserver status ...
	I0915 20:51:03.367722   59380 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0915 20:51:03.454674   59380 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0915 20:51:03.655427   59380 api_server.go:164] Checking apiserver status ...
	I0915 20:51:03.672146   59380 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0915 20:51:03.756122   59380 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0915 20:51:03.855916   59380 api_server.go:164] Checking apiserver status ...
	I0915 20:51:03.877207   59380 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0915 20:51:03.963802   59380 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0915 20:51:04.058696   59380 api_server.go:164] Checking apiserver status ...
	I0915 20:51:04.076310   59380 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0915 20:51:04.185207   59380 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0915 20:51:04.256758   59380 api_server.go:164] Checking apiserver status ...
	I0915 20:51:04.273723   59380 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0915 20:51:04.394739   59380 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0915 20:51:04.455581   59380 api_server.go:164] Checking apiserver status ...
	I0915 20:51:04.470530   59380 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0915 20:51:04.559519   59380 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0915 20:51:04.655428   59380 api_server.go:164] Checking apiserver status ...
	I0915 20:51:04.668227   59380 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0915 20:51:04.745180   59380 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0915 20:51:04.856244   59380 api_server.go:164] Checking apiserver status ...
	I0915 20:51:04.877763   59380 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0915 20:51:04.980750   59380 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0915 20:51:05.059908   59380 api_server.go:164] Checking apiserver status ...
	I0915 20:51:05.076776   59380 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0915 20:51:05.176351   59380 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0915 20:51:05.255720   59380 api_server.go:164] Checking apiserver status ...
	I0915 20:51:05.275914   59380 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0915 20:51:05.417375   59380 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0915 20:51:05.455846   59380 api_server.go:164] Checking apiserver status ...
	I0915 20:51:05.478731   59380 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0915 20:51:05.596342   59380 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0915 20:51:05.656902   59380 api_server.go:164] Checking apiserver status ...
	I0915 20:51:05.672402   59380 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0915 20:51:05.770655   59380 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0915 20:51:05.855178   59380 api_server.go:164] Checking apiserver status ...
	I0915 20:51:05.886580   59380 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0915 20:51:06.052438   59380 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0915 20:51:06.055259   59380 api_server.go:164] Checking apiserver status ...
	I0915 20:51:06.079971   59380 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0915 20:51:06.200676   59380 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0915 20:51:06.255708   59380 api_server.go:164] Checking apiserver status ...
	I0915 20:51:06.272544   59380 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0915 20:51:06.422330   59380 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0915 20:51:06.455143   59380 api_server.go:164] Checking apiserver status ...
	I0915 20:51:06.469648   59380 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0915 20:51:06.647162   59380 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0915 20:51:06.647162   59380 api_server.go:164] Checking apiserver status ...
	I0915 20:51:06.660160   59380 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0915 20:51:06.819274   59380 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0915 20:51:06.819274   59380 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
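The repeated checks above are a fixed-interval poll: run the `pgrep` probe roughly every 200ms until the apiserver appears or the deadline passes, then give up with "timed out waiting for the condition". A simplified sketch of that loop (the `check` callable stands in for the real probe and is hypothetical):

```python
import time

def poll_until(check, timeout_s: float, interval_s: float = 0.2):
    """Call `check()` until it returns a truthy value or `timeout_s` elapses.
    Returns the truthy value, or None on timeout (the 'timed out waiting
    for the condition' case in the log)."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        result = check()
        if result:
            return result
        time.sleep(interval_s)
    return None

# Example: a fake probe that succeeds on its third call
calls = {"n": 0}
def fake_pgrep():
    calls["n"] += 1
    return 4242 if calls["n"] >= 3 else None

pid = poll_until(fake_pgrep, timeout_s=2.0, interval_s=0.01)
```

With a probe that never succeeds, the same loop returns None after the deadline, which is the branch the log takes before deciding the cluster "needs reconfigure".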
	I0915 20:51:06.819274   59380 kubeadm.go:1032] stopping kube-system containers ...
	I0915 20:51:06.843295   59380 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0915 20:51:07.463836   59380 docker.go:390] Stopping containers: [2e8024166c5f 028e632b2947 6725e46f9238 3b9130022485 379a838db2dd d108901bec10 e9654a15ffc5 53f543bce41d 08719ca43a1d 54fce4ae4236 d264b3e84f3f cdd4124f0db3 4b80283bd759 dd0d351f0d00 09a68128c04f 31222e7e4d55 793c21b00c62]
	I0915 20:51:07.477079   59380 ssh_runner.go:152] Run: docker stop 2e8024166c5f 028e632b2947 6725e46f9238 3b9130022485 379a838db2dd d108901bec10 e9654a15ffc5 53f543bce41d 08719ca43a1d 54fce4ae4236 d264b3e84f3f cdd4124f0db3 4b80283bd759 dd0d351f0d00 09a68128c04f 31222e7e4d55 793c21b00c62
	I0915 20:51:08.026849   59380 ssh_runner.go:152] Run: sudo systemctl stop kubelet
	I0915 20:51:08.205321   59380 ssh_runner.go:152] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0915 20:51:08.355065   59380 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5747 Sep 15 20:43 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5783 Sep 15 20:43 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5935 Sep 15 20:43 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5731 Sep 15 20:43 /etc/kubernetes/scheduler.conf
	
	I0915 20:51:08.374545   59380 ssh_runner.go:152] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0915 20:51:08.485693   59380 ssh_runner.go:152] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0915 20:51:08.582349   59380 ssh_runner.go:152] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0915 20:51:08.688440   59380 ssh_runner.go:152] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0915 20:51:08.777056   59380 ssh_runner.go:152] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0915 20:51:08.853815   59380 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0915 20:51:08.853815   59380 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0915 20:51:10.058340   59380 ssh_runner.go:192] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml": (1.2045321s)
	I0915 20:51:10.058787   59380 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0915 20:51:13.172892   59380 ssh_runner.go:192] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (3.1141241s)
	I0915 20:51:13.172892   59380 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0915 20:51:15.322182   59380 ssh_runner.go:192] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml": (2.1493022s)
	I0915 20:51:15.322182   59380 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0915 20:51:15.801312   59380 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0915 20:51:16.159213   59380 api_server.go:50] waiting for apiserver process to appear ...
	I0915 20:51:16.183842   59380 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 20:51:16.942183   59380 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 20:51:17.452196   59380 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 20:51:17.954055   59380 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 20:51:18.452594   59380 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 20:51:18.954226   59380 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 20:51:19.447346   59380 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 20:51:19.947962   59380 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 20:51:20.454240   59380 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 20:51:20.954157   59380 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 20:51:21.444601   59380 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 20:51:21.953115   59380 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 20:51:22.439832   59380 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 20:51:22.941903   59380 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 20:51:23.459660   59380 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 20:51:23.951082   59380 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 20:51:24.444646   59380 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 20:51:24.938030   59380 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 20:51:25.444303   59380 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 20:51:25.941836   59380 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 20:51:26.444681   59380 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 20:51:26.956056   59380 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 20:51:27.456741   59380 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 20:51:27.953989   59380 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 20:51:28.442249   59380 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 20:51:28.940934   59380 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 20:51:29.445051   59380 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 20:51:29.948449   59380 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 20:51:30.442541   59380 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 20:51:30.942611   59380 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 20:51:31.444759   59380 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 20:51:31.946106   59380 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 20:51:32.445762   59380 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 20:51:32.944188   59380 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 20:51:33.450796   59380 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 20:51:33.943799   59380 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 20:51:34.451407   59380 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 20:51:34.942634   59380 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 20:51:35.437475   59380 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 20:51:35.951797   59380 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 20:51:36.962913   59380 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 20:51:37.940358   59380 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 20:51:38.957033   59380 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 20:51:39.435185   59380 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 20:51:40.069090   59380 api_server.go:70] duration metric: took 23.9100213s to wait for apiserver process to appear ...
	I0915 20:51:40.069280   59380 api_server.go:86] waiting for apiserver healthz status ...
	I0915 20:51:40.069280   59380 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:57815/healthz ...
	I0915 20:51:40.096979   59380 api_server.go:255] stopped: https://127.0.0.1:57815/healthz: Get "https://127.0.0.1:57815/healthz": EOF
	I0915 20:51:40.597242   59380 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:57815/healthz ...
	I0915 20:51:45.599023   59380 api_server.go:255] stopped: https://127.0.0.1:57815/healthz: Get "https://127.0.0.1:57815/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 20:51:46.101120   59380 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:57815/healthz ...
	I0915 20:51:51.102079   59380 api_server.go:255] stopped: https://127.0.0.1:57815/healthz: Get "https://127.0.0.1:57815/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 20:51:51.597167   59380 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:57815/healthz ...
	I0915 20:51:56.598617   59380 api_server.go:255] stopped: https://127.0.0.1:57815/healthz: Get "https://127.0.0.1:57815/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 20:51:57.098567   59380 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:57815/healthz ...
	I0915 20:52:02.101726   59380 api_server.go:255] stopped: https://127.0.0.1:57815/healthz: Get "https://127.0.0.1:57815/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 20:52:02.597329   59380 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:57815/healthz ...
	I0915 20:52:07.598604   59380 api_server.go:255] stopped: https://127.0.0.1:57815/healthz: Get "https://127.0.0.1:57815/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 20:52:08.098090   59380 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:57815/healthz ...
	I0915 20:52:13.101077   59380 api_server.go:255] stopped: https://127.0.0.1:57815/healthz: Get "https://127.0.0.1:57815/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 20:52:13.605640   59380 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:57815/healthz ...
	I0915 20:52:17.901897   59380 api_server.go:265] https://127.0.0.1:57815/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0915 20:52:17.901897   59380 api_server.go:101] status: https://127.0.0.1:57815/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0915 20:52:18.097030   59380 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:57815/healthz ...
	I0915 20:52:18.575121   59380 api_server.go:265] https://127.0.0.1:57815/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0915 20:52:18.575121   59380 api_server.go:101] status: https://127.0.0.1:57815/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0915 20:52:18.597854   59380 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:57815/healthz ...
	I0915 20:52:18.807643   59380 api_server.go:265] https://127.0.0.1:57815/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0915 20:52:18.807884   59380 api_server.go:101] status: https://127.0.0.1:57815/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0915 20:52:19.097345   59380 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:57815/healthz ...
	I0915 20:52:19.222325   59380 api_server.go:265] https://127.0.0.1:57815/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	W0915 20:52:19.222684   59380 api_server.go:101] status: https://127.0.0.1:57815/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	I0915 20:52:19.598017   59380 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:57815/healthz ...
	I0915 20:52:19.719343   59380 api_server.go:265] https://127.0.0.1:57815/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	W0915 20:52:19.719343   59380 api_server.go:101] status: https://127.0.0.1:57815/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	I0915 20:52:20.098665   59380 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:57815/healthz ...
	I0915 20:52:22.159035   59380 api_server.go:265] https://127.0.0.1:57815/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	W0915 20:52:22.159480   59380 api_server.go:101] status: https://127.0.0.1:57815/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	I0915 20:52:22.598490   59380 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:57815/healthz ...
	I0915 20:52:23.468592   59380 api_server.go:265] https://127.0.0.1:57815/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	W0915 20:52:23.468788   59380 api_server.go:101] status: https://127.0.0.1:57815/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	I0915 20:52:23.597306   59380 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:57815/healthz ...
	I0915 20:52:23.760928   59380 api_server.go:265] https://127.0.0.1:57815/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	W0915 20:52:23.761132   59380 api_server.go:101] status: https://127.0.0.1:57815/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	I0915 20:52:24.097784   59380 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:57815/healthz ...
	I0915 20:52:24.732468   59380 api_server.go:265] https://127.0.0.1:57815/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	W0915 20:52:24.732830   59380 api_server.go:101] status: https://127.0.0.1:57815/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	I0915 20:52:25.099513   59380 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:57815/healthz ...
	I0915 20:52:25.286447   59380 api_server.go:265] https://127.0.0.1:57815/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	W0915 20:52:25.286915   59380 api_server.go:101] status: https://127.0.0.1:57815/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	I0915 20:52:25.597539   59380 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:57815/healthz ...
	I0915 20:52:25.946230   59380 api_server.go:265] https://127.0.0.1:57815/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	W0915 20:52:25.946844   59380 api_server.go:101] status: https://127.0.0.1:57815/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	I0915 20:52:26.098234   59380 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:57815/healthz ...
	I0915 20:52:26.160786   59380 api_server.go:265] https://127.0.0.1:57815/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	W0915 20:52:26.160786   59380 api_server.go:101] status: https://127.0.0.1:57815/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	I0915 20:52:26.598381   59380 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:57815/healthz ...
	I0915 20:52:26.740583   59380 api_server.go:265] https://127.0.0.1:57815/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	W0915 20:52:26.740785   59380 api_server.go:101] status: https://127.0.0.1:57815/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	I0915 20:52:27.097330   59380 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:57815/healthz ...
	I0915 20:52:27.310201   59380 api_server.go:265] https://127.0.0.1:57815/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	W0915 20:52:27.310201   59380 api_server.go:101] status: https://127.0.0.1:57815/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	I0915 20:52:27.598721   59380 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:57815/healthz ...
	I0915 20:52:27.777414   59380 api_server.go:265] https://127.0.0.1:57815/healthz returned 200:
	ok
	I0915 20:52:28.151476   59380 api_server.go:139] control plane version: v1.14.0
	I0915 20:52:28.151476   59380 api_server.go:129] duration metric: took 48.0824883s to wait for apiserver health ...
	I0915 20:52:28.152445   59380 cni.go:93] Creating CNI manager for ""
	I0915 20:52:28.152445   59380 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0915 20:52:28.152445   59380 system_pods.go:43] waiting for kube-system pods to appear ...
	I0915 20:52:28.270836   59380 system_pods.go:59] 8 kube-system pods found
	I0915 20:52:28.270836   59380 system_pods.go:61] "coredns-fb8b8dccf-7ks7k" [dbd5260b-1665-11ec-901b-02421148780c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0915 20:52:28.270836   59380 system_pods.go:61] "etcd-old-k8s-version-20210915203352-22848" [e30f5d24-1665-11ec-901b-02421148780c] Running
	I0915 20:52:28.270836   59380 system_pods.go:61] "kube-apiserver-old-k8s-version-20210915203352-22848" [e8879afc-1665-11ec-901b-02421148780c] Running
	I0915 20:52:28.270836   59380 system_pods.go:61] "kube-controller-manager-old-k8s-version-20210915203352-22848" [d39e09db-1666-11ec-96ab-0242535a2e8f] Pending
	I0915 20:52:28.270836   59380 system_pods.go:61] "kube-proxy-f2dsj" [dbdeb1ac-1665-11ec-901b-02421148780c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0915 20:52:28.270836   59380 system_pods.go:61] "kube-scheduler-old-k8s-version-20210915203352-22848" [dab2ebb8-1665-11ec-901b-02421148780c] Running
	I0915 20:52:28.270836   59380 system_pods.go:61] "metrics-server-8546d8b77b-fth4j" [71867ad0-1666-11ec-901b-02421148780c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0915 20:52:28.270836   59380 system_pods.go:61] "storage-provisioner" [f3bbc18f-1665-11ec-901b-02421148780c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0915 20:52:28.271106   59380 system_pods.go:74] duration metric: took 118.6614ms to wait for pod list to return data ...
	I0915 20:52:28.271106   59380 node_conditions.go:102] verifying NodePressure condition ...
	I0915 20:52:28.673765   59380 node_conditions.go:122] node storage ephemeral capacity is 65792556Ki
	I0915 20:52:28.673765   59380 node_conditions.go:123] node cpu capacity is 4
	I0915 20:52:28.673765   59380 node_conditions.go:105] duration metric: took 402.6616ms to run NodePressure ...
	I0915 20:52:28.673765   59380 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0915 20:52:36.364170   59380 ssh_runner.go:192] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (7.6904516s)
	I0915 20:52:36.364383   59380 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0915 20:52:36.460260   59380 retry.go:31] will retry after 276.165072ms: kubelet not initialised
	I0915 20:52:36.826605   59380 retry.go:31] will retry after 540.190908ms: kubelet not initialised
	I0915 20:52:37.431022   59380 retry.go:31] will retry after 655.06503ms: kubelet not initialised
	I0915 20:52:38.414740   59380 retry.go:31] will retry after 791.196345ms: kubelet not initialised
	I0915 20:52:39.314909   59380 retry.go:31] will retry after 1.170244332s: kubelet not initialised
	I0915 20:52:40.519753   59380 retry.go:31] will retry after 2.253109428s: kubelet not initialised
	I0915 20:52:42.842868   59380 retry.go:31] will retry after 1.610739793s: kubelet not initialised
	I0915 20:52:44.519102   59380 kubeadm.go:746] kubelet initialised
	I0915 20:52:44.519102   59380 kubeadm.go:747] duration metric: took 8.1547682s waiting for restarted kubelet to initialise ...
	I0915 20:52:44.519436   59380 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 20:52:44.595891   59380 pod_ready.go:78] waiting up to 4m0s for pod "coredns-fb8b8dccf-7ks7k" in "kube-system" namespace to be "Ready" ...
	I0915 20:52:46.772539   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-7ks7k" in "kube-system" namespace has status "Ready":"False"
	I0915 20:52:49.463886   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-7ks7k" in "kube-system" namespace has status "Ready":"False"
	I0915 20:52:52.070609   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-7ks7k" in "kube-system" namespace has status "Ready":"False"
	I0915 20:52:54.277685   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-7ks7k" in "kube-system" namespace has status "Ready":"False"
	I0915 20:52:56.765054   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-7ks7k" in "kube-system" namespace has status "Ready":"False"
	I0915 20:52:58.903459   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-7ks7k" in "kube-system" namespace has status "Ready":"False"
	I0915 20:53:00.910720   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-7ks7k" in "kube-system" namespace has status "Ready":"False"
	I0915 20:53:03.365243   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-7ks7k" in "kube-system" namespace has status "Ready":"False"
	I0915 20:53:05.774985   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-7ks7k" in "kube-system" namespace has status "Ready":"False"
	I0915 20:53:07.859725   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-7ks7k" in "kube-system" namespace has status "Ready":"False"
	I0915 20:53:10.226076   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-7ks7k" in "kube-system" namespace has status "Ready":"False"
	I0915 20:53:12.770322   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-7ks7k" in "kube-system" namespace has status "Ready":"False"
	I0915 20:53:15.324821   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-7ks7k" in "kube-system" namespace has status "Ready":"False"
	I0915 20:53:17.327773   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-7ks7k" in "kube-system" namespace has status "Ready":"False"
	I0915 20:53:19.837662   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-7ks7k" in "kube-system" namespace has status "Ready":"False"
	I0915 20:53:22.600664   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-7ks7k" in "kube-system" namespace has status "Ready":"False"
	I0915 20:53:24.761973   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-7ks7k" in "kube-system" namespace has status "Ready":"False"
	I0915 20:53:27.263182   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-7ks7k" in "kube-system" namespace has status "Ready":"False"
	I0915 20:53:29.321626   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-7ks7k" in "kube-system" namespace has status "Ready":"False"
	I0915 20:53:31.773950   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-7ks7k" in "kube-system" namespace has status "Ready":"False"
	I0915 20:53:33.825958   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-7ks7k" in "kube-system" namespace has status "Ready":"False"
	I0915 20:53:35.829773   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-7ks7k" in "kube-system" namespace has status "Ready":"False"
	I0915 20:53:37.837882   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-7ks7k" in "kube-system" namespace has status "Ready":"False"
	I0915 20:53:39.916230   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-7ks7k" in "kube-system" namespace has status "Ready":"False"
	I0915 20:53:42.297871   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-7ks7k" in "kube-system" namespace has status "Ready":"False"
	I0915 20:53:44.733515   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-7ks7k" in "kube-system" namespace has status "Ready":"False"
	I0915 20:53:47.208400   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-7ks7k" in "kube-system" namespace has status "Ready":"False"
	I0915 20:53:47.975258   59380 pod_ready.go:92] pod "coredns-fb8b8dccf-7ks7k" in "kube-system" namespace has status "Ready":"True"
	I0915 20:53:47.975456   59380 pod_ready.go:81] duration metric: took 1m3.3799564s waiting for pod "coredns-fb8b8dccf-7ks7k" in "kube-system" namespace to be "Ready" ...
	I0915 20:53:47.975456   59380 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-20210915203352-22848" in "kube-system" namespace to be "Ready" ...
	I0915 20:53:48.508921   59380 pod_ready.go:92] pod "etcd-old-k8s-version-20210915203352-22848" in "kube-system" namespace has status "Ready":"True"
	I0915 20:53:48.508921   59380 pod_ready.go:81] duration metric: took 533.4685ms waiting for pod "etcd-old-k8s-version-20210915203352-22848" in "kube-system" namespace to be "Ready" ...
	I0915 20:53:48.509171   59380 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-20210915203352-22848" in "kube-system" namespace to be "Ready" ...
	I0915 20:53:48.764240   59380 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-20210915203352-22848" in "kube-system" namespace has status "Ready":"True"
	I0915 20:53:48.764240   59380 pod_ready.go:81] duration metric: took 255.0704ms waiting for pod "kube-apiserver-old-k8s-version-20210915203352-22848" in "kube-system" namespace to be "Ready" ...
	I0915 20:53:48.764240   59380 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-20210915203352-22848" in "kube-system" namespace to be "Ready" ...
	I0915 20:53:48.834352   59380 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-20210915203352-22848" in "kube-system" namespace has status "Ready":"True"
	I0915 20:53:48.834685   59380 pod_ready.go:81] duration metric: took 70.2304ms waiting for pod "kube-controller-manager-old-k8s-version-20210915203352-22848" in "kube-system" namespace to be "Ready" ...
	I0915 20:53:48.834685   59380 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-f2dsj" in "kube-system" namespace to be "Ready" ...
	I0915 20:53:49.124943   59380 pod_ready.go:92] pod "kube-proxy-f2dsj" in "kube-system" namespace has status "Ready":"True"
	I0915 20:53:49.125303   59380 pod_ready.go:81] duration metric: took 290.6196ms waiting for pod "kube-proxy-f2dsj" in "kube-system" namespace to be "Ready" ...
	I0915 20:53:49.125303   59380 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-20210915203352-22848" in "kube-system" namespace to be "Ready" ...
	I0915 20:53:49.400080   59380 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-20210915203352-22848" in "kube-system" namespace has status "Ready":"True"
	I0915 20:53:49.400080   59380 pod_ready.go:81] duration metric: took 274.7788ms waiting for pod "kube-scheduler-old-k8s-version-20210915203352-22848" in "kube-system" namespace to be "Ready" ...
	I0915 20:53:49.400080   59380 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-8546d8b77b-fth4j" in "kube-system" namespace to be "Ready" ...
	I0915 20:53:51.534637   59380 pod_ready.go:102] pod "metrics-server-8546d8b77b-fth4j" in "kube-system" namespace has status "Ready":"False"
	... (identical pod_ready.go:102 polling entries, repeated roughly every 2–3s from 20:53:53 through 20:57:45, elided; pod "metrics-server-8546d8b77b-fth4j" reported "Ready":"False" on every poll) ...
	I0915 20:57:49.048547   59380 pod_ready.go:102] pod "metrics-server-8546d8b77b-fth4j" in "kube-system" namespace has status "Ready":"False"
	I0915 20:57:49.742929   59380 pod_ready.go:81] duration metric: took 4m0.3443546s waiting for pod "metrics-server-8546d8b77b-fth4j" in "kube-system" namespace to be "Ready" ...
	E0915 20:57:49.742929   59380 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-8546d8b77b-fth4j" in "kube-system" namespace to be "Ready" (will not retry!)
	I0915 20:57:49.742929   59380 pod_ready.go:38] duration metric: took 5m5.225399s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 20:57:49.743476   59380 kubeadm.go:604] restartCluster took 6m47.4006774s
	W0915 20:57:49.744366   59380 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0915 20:57:49.744366   59380 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0915 20:58:16.685409   59380 ssh_runner.go:192] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force": (26.9412131s)
	I0915 20:58:16.715636   59380 ssh_runner.go:152] Run: sudo systemctl stop -f kubelet
	I0915 20:58:16.830747   59380 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0915 20:58:17.199845   59380 ssh_runner.go:152] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0915 20:58:17.305415   59380 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0915 20:58:17.341076   59380 ssh_runner.go:152] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0915 20:58:17.438300   59380 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0915 20:58:17.438794   59380 ssh_runner.go:243] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0915 20:59:49.084926   59380 out.go:204]   - Generating certificates and keys ...
	I0915 20:59:49.090402   59380 out.go:204]   - Booting up control plane ...
	I0915 20:59:49.095946   59380 out.go:204]   - Configuring RBAC rules ...
	I0915 20:59:49.108830   59380 cni.go:93] Creating CNI manager for ""
	I0915 20:59:49.109142   59380 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0915 20:59:49.109812   59380 ssh_runner.go:152] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0915 20:59:49.127056   59380 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 20:59:49.127056   59380 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl label nodes minikube.k8s.io/version=v1.23.0 minikube.k8s.io/commit=0d321606059ead2904f4f5ddd59a9a7026c7ee04 minikube.k8s.io/name=old-k8s-version-20210915203352-22848 minikube.k8s.io/updated_at=2021_09_15T20_59_49_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 20:59:50.174156   59380 ssh_runner.go:192] Completed: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj": (1.0643503s)
	I0915 20:59:50.174156   59380 ops.go:34] apiserver oom_adj: 16
	I0915 20:59:50.174538   59380 ops.go:39] adjusting apiserver oom_adj to -10
	I0915 20:59:50.174538   59380 ssh_runner.go:152] Run: /bin/bash -c "echo -10 | sudo tee /proc/$(pgrep kube-apiserver)/oom_adj"
	I0915 20:59:55.533807   59380 ssh_runner.go:192] Completed: sudo /var/lib/minikube/binaries/v1.14.0/kubectl label nodes minikube.k8s.io/version=v1.23.0 minikube.k8s.io/commit=0d321606059ead2904f4f5ddd59a9a7026c7ee04 minikube.k8s.io/name=old-k8s-version-20210915203352-22848 minikube.k8s.io/updated_at=2021_09_15T20_59_49_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig: (6.4067919s)
	I0915 20:59:55.534008   59380 ssh_runner.go:192] Completed: sudo /var/lib/minikube/binaries/v1.14.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig: (6.4069927s)
	I0915 20:59:55.534008   59380 ssh_runner.go:192] Completed: /bin/bash -c "echo -10 | sudo tee /proc/$(pgrep kube-apiserver)/oom_adj": (5.3595038s)
	I0915 20:59:55.554399   59380 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 20:59:56.820357   59380 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 20:59:57.833308   59380 ssh_runner.go:192] Completed: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (1.0129572s)
	I0915 20:59:58.323287   59380 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 20:59:59.820293   59380 ssh_runner.go:192] Completed: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (1.4970155s)
	I0915 21:00:00.317178   59380 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 21:00:01.459280   59380 ssh_runner.go:192] Completed: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (1.1421093s)
	I0915 21:00:01.834158   59380 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 21:00:03.722510   59380 ssh_runner.go:192] Completed: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (1.8880671s)
	I0915 21:00:03.816411   59380 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 21:00:06.826815   59380 ssh_runner.go:192] Completed: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (3.010423s)
	I0915 21:00:07.315039   59380 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 21:00:09.286915   59380 ssh_runner.go:192] Completed: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (1.9718888s)
	I0915 21:00:09.330531   59380 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 21:00:11.976283   59380 ssh_runner.go:192] Completed: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (2.6457689s)
	I0915 21:00:12.321475   59380 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 21:00:15.342524   59380 ssh_runner.go:192] Completed: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (3.0210684s)
	I0915 21:00:15.342683   59380 kubeadm.go:985] duration metric: took 26.2327223s to wait for elevateKubeSystemPrivileges.
	I0915 21:00:15.342683   59380 kubeadm.go:392] StartCluster complete in 9m13.3854457s
	I0915 21:00:15.342683   59380 settings.go:142] acquiring lock: {Name:mk81656fcf8bcddd49caaa1adb1c177165a02100 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 21:00:15.343429   59380 settings.go:150] Updating kubeconfig:  C:\Users\jenkins\minikube-integration\kubeconfig
	I0915 21:00:15.348588   59380 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\kubeconfig: {Name:mk312e0248780fd448f3a83862df8ee597f47373 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 21:00:16.208206   59380 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "old-k8s-version-20210915203352-22848" rescaled to 1
	I0915 21:00:16.208402   59380 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}
	I0915 21:00:16.212033   59380 out.go:177] * Verifying Kubernetes components...
	I0915 21:00:16.210587   59380 addons.go:404] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0915 21:00:16.210934   59380 config.go:177] Loaded profile config "old-k8s-version-20210915203352-22848": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.14.0
	I0915 21:00:16.212325   59380 addons.go:65] Setting storage-provisioner=true in profile "old-k8s-version-20210915203352-22848"
	I0915 21:00:16.212531   59380 addons.go:153] Setting addon storage-provisioner=true in "old-k8s-version-20210915203352-22848"
	I0915 21:00:16.212531   59380 addons.go:65] Setting default-storageclass=true in profile "old-k8s-version-20210915203352-22848"
	I0915 21:00:16.212531   59380 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-20210915203352-22848"
	I0915 21:00:16.212531   59380 addons.go:65] Setting dashboard=true in profile "old-k8s-version-20210915203352-22848"
	I0915 21:00:16.212943   59380 addons.go:153] Setting addon dashboard=true in "old-k8s-version-20210915203352-22848"
	W0915 21:00:16.212943   59380 addons.go:165] addon dashboard should already be in state true
	I0915 21:00:16.212943   59380 host.go:66] Checking if "old-k8s-version-20210915203352-22848" exists ...
	I0915 21:00:16.212943   59380 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	W0915 21:00:16.212531   59380 addons.go:165] addon storage-provisioner should already be in state true
	I0915 21:00:16.212531   59380 addons.go:65] Setting metrics-server=true in profile "old-k8s-version-20210915203352-22848"
	I0915 21:00:16.216749   59380 addons.go:153] Setting addon metrics-server=true in "old-k8s-version-20210915203352-22848"
	W0915 21:00:16.216749   59380 addons.go:165] addon metrics-server should already be in state true
	I0915 21:00:16.216749   59380 host.go:66] Checking if "old-k8s-version-20210915203352-22848" exists ...
	I0915 21:00:16.222413   59380 host.go:66] Checking if "old-k8s-version-20210915203352-22848" exists ...
	I0915 21:00:16.231099   59380 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
	I0915 21:00:16.240686   59380 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210915203352-22848 --format={{.State.Status}}
	I0915 21:00:16.242829   59380 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210915203352-22848 --format={{.State.Status}}
	I0915 21:00:16.273813   59380 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210915203352-22848 --format={{.State.Status}}
	I0915 21:00:16.277124   59380 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210915203352-22848 --format={{.State.Status}}
	I0915 21:00:17.194314   59380 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0915 21:00:17.198127   59380 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0915 21:00:17.198478   59380 addons.go:337] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0915 21:00:17.198597   59380 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0915 21:00:17.215090   59380 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210915203352-22848
	I0915 21:00:17.251327   59380 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0915 21:00:17.251946   59380 addons.go:337] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 21:00:17.252074   59380 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0915 21:00:17.272371   59380 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210915203352-22848
	I0915 21:00:17.301744   59380 cli_runner.go:168] Completed: docker container inspect old-k8s-version-20210915203352-22848 --format={{.State.Status}}: (1.0279378s)
	I0915 21:00:17.303718   59380 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0915 21:00:17.303718   59380 addons.go:337] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0915 21:00:17.303718   59380 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0915 21:00:17.312721   59380 addons.go:153] Setting addon default-storageclass=true in "old-k8s-version-20210915203352-22848"
	W0915 21:00:17.312721   59380 addons.go:165] addon default-storageclass should already be in state true
	I0915 21:00:17.312721   59380 host.go:66] Checking if "old-k8s-version-20210915203352-22848" exists ...
	I0915 21:00:17.314722   59380 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210915203352-22848
	I0915 21:00:17.339723   59380 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210915203352-22848 --format={{.State.Status}}
	I0915 21:00:18.208562   59380 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57811 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\old-k8s-version-20210915203352-22848\id_rsa Username:docker}
	I0915 21:00:18.208562   59380 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57811 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\old-k8s-version-20210915203352-22848\id_rsa Username:docker}
	I0915 21:00:18.256256   59380 addons.go:337] installing /etc/kubernetes/addons/storageclass.yaml
	I0915 21:00:18.256256   59380 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0915 21:00:18.269178   59380 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210915203352-22848
	I0915 21:00:18.338679   59380 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210915203352-22848: (1.0239637s)
	I0915 21:00:18.339893   59380 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57811 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\old-k8s-version-20210915203352-22848\id_rsa Username:docker}
	I0915 21:00:19.046462   59380 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57811 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\old-k8s-version-20210915203352-22848\id_rsa Username:docker}
	I0915 21:00:21.534926   59380 addons.go:337] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0915 21:00:21.535195   59380 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0915 21:00:22.530124   59380 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 21:00:22.798909   59380 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0915 21:00:23.256423   59380 addons.go:337] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0915 21:00:23.256645   59380 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0915 21:00:24.129638   59380 addons.go:337] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0915 21:00:24.129822   59380 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0915 21:00:24.888014   59380 ssh_runner.go:192] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (8.6723847s)
	I0915 21:00:24.888573   59380 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0915 21:00:24.888573   59380 ssh_runner.go:192] Completed: sudo systemctl is-active --quiet service kubelet: (8.6573527s)
	I0915 21:00:24.904827   59380 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20210915203352-22848
	I0915 21:00:25.752836   59380 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-20210915203352-22848" to be "Ready" ...
	I0915 21:00:26.081216   59380 node_ready.go:49] node "old-k8s-version-20210915203352-22848" has status "Ready":"True"
	I0915 21:00:26.081507   59380 node_ready.go:38] duration metric: took 328.673ms waiting for node "old-k8s-version-20210915203352-22848" to be "Ready" ...
	I0915 21:00:26.082206   59380 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 21:00:26.195826   59380 pod_ready.go:78] waiting up to 6m0s for pod "coredns-fb8b8dccf-hgzn5" in "kube-system" namespace to be "Ready" ...
	I0915 21:00:27.937283   59380 addons.go:337] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0915 21:00:27.937283   59380 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0915 21:00:28.362298   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-hgzn5" in "kube-system" namespace has status "Ready":"False"
	I0915 21:00:28.638878   59380 addons.go:337] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0915 21:00:28.638878   59380 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0915 21:00:28.967361   59380 addons.go:337] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0915 21:00:28.967553   59380 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0915 21:00:30.473150   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-hgzn5" in "kube-system" namespace has status "Ready":"False"
	I0915 21:00:31.268668   59380 pod_ready.go:97] error getting pod "coredns-fb8b8dccf-hgzn5" in "kube-system" namespace (skipping!): pods "coredns-fb8b8dccf-hgzn5" not found
	I0915 21:00:31.268668   59380 pod_ready.go:81] duration metric: took 5.0728748s waiting for pod "coredns-fb8b8dccf-hgzn5" in "kube-system" namespace to be "Ready" ...
	E0915 21:00:31.268668   59380 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-fb8b8dccf-hgzn5" in "kube-system" namespace (skipping!): pods "coredns-fb8b8dccf-hgzn5" not found
	I0915 21:00:31.268668   59380 pod_ready.go:78] waiting up to 6m0s for pod "coredns-fb8b8dccf-pxwb4" in "kube-system" namespace to be "Ready" ...
	I0915 21:00:32.870358   59380 addons.go:337] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0915 21:00:32.870800   59380 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0915 21:00:33.467789   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-pxwb4" in "kube-system" namespace has status "Ready":"False"
	I0915 21:00:33.933443   59380 addons.go:337] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0915 21:00:33.933622   59380 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0915 21:00:34.299843   59380 addons.go:337] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0915 21:00:34.299843   59380 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0915 21:00:35.648121   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-pxwb4" in "kube-system" namespace has status "Ready":"False"
	I0915 21:00:38.115418   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-pxwb4" in "kube-system" namespace has status "Ready":"False"
	I0915 21:00:38.828682   59380 addons.go:337] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0915 21:00:38.829102   59380 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0915 21:00:40.251037   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-pxwb4" in "kube-system" namespace has status "Ready":"False"
	I0915 21:00:42.568268   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-pxwb4" in "kube-system" namespace has status "Ready":"False"
	I0915 21:00:44.822885   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-pxwb4" in "kube-system" namespace has status "Ready":"False"
	I0915 21:00:45.195360   59380 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0915 21:00:47.494547   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-pxwb4" in "kube-system" namespace has status "Ready":"False"
	I0915 21:00:47.552548   59380 addons.go:337] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0915 21:00:47.552816   59380 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0915 21:00:49.653794   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-pxwb4" in "kube-system" namespace has status "Ready":"False"
	I0915 21:00:52.146838   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-pxwb4" in "kube-system" namespace has status "Ready":"False"
	I0915 21:00:54.151419   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-pxwb4" in "kube-system" namespace has status "Ready":"False"
	I0915 21:00:56.471250   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-pxwb4" in "kube-system" namespace has status "Ready":"False"
	I0915 21:00:56.543060   59380 addons.go:337] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0915 21:00:56.543244   59380 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0915 21:00:59.085497   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-pxwb4" in "kube-system" namespace has status "Ready":"False"
	I0915 21:01:01.110633   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-pxwb4" in "kube-system" namespace has status "Ready":"False"
	I0915 21:01:03.117434   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-pxwb4" in "kube-system" namespace has status "Ready":"False"
	I0915 21:01:05.434982   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-pxwb4" in "kube-system" namespace has status "Ready":"False"
	I0915 21:01:05.648655   59380 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0915 21:01:07.554451   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-pxwb4" in "kube-system" namespace has status "Ready":"False"
	I0915 21:01:09.951268   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-pxwb4" in "kube-system" namespace has status "Ready":"False"
	I0915 21:01:12.127879   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-pxwb4" in "kube-system" namespace has status "Ready":"False"
	I0915 21:01:18.201727   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-pxwb4" in "kube-system" namespace has status "Ready":"False"
	I0915 21:01:20.176124   59380 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (57.6460696s)
	I0915 21:01:20.176481   59380 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (57.3777192s)
	I0915 21:01:20.176638   59380 ssh_runner.go:192] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (55.2882621s)
	I0915 21:01:20.176816   59380 start.go:729] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0915 21:01:20.663497   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-pxwb4" in "kube-system" namespace has status "Ready":"False"
	I0915 21:01:22.936576   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-pxwb4" in "kube-system" namespace has status "Ready":"False"
	I0915 21:01:23.246148   59380 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (38.0510324s)
	I0915 21:01:23.246535   59380 addons.go:375] Verifying addon metrics-server=true in "old-k8s-version-20210915203352-22848"
	I0915 21:01:25.292294   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-pxwb4" in "kube-system" namespace has status "Ready":"False"
	I0915 21:01:27.574800   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-pxwb4" in "kube-system" namespace has status "Ready":"False"
	I0915 21:01:29.704252   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-pxwb4" in "kube-system" namespace has status "Ready":"False"
	I0915 21:01:30.670252   59380 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (25.0215831s)
	I0915 21:01:30.675627   59380 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0915 21:01:30.675920   59380 addons.go:406] enableAddons completed in 1m14.4658098s
	I0915 21:01:31.930045   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-pxwb4" in "kube-system" namespace has status "Ready":"False"
	I0915 21:01:33.957283   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-pxwb4" in "kube-system" namespace has status "Ready":"False"
	I0915 21:01:36.008270   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-pxwb4" in "kube-system" namespace has status "Ready":"False"
	I0915 21:01:38.462846   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-pxwb4" in "kube-system" namespace has status "Ready":"False"
	I0915 21:01:40.477833   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-pxwb4" in "kube-system" namespace has status "Ready":"False"
	I0915 21:01:42.492692   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-pxwb4" in "kube-system" namespace has status "Ready":"False"
	I0915 21:01:45.004798   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-pxwb4" in "kube-system" namespace has status "Ready":"False"
	I0915 21:01:47.433645   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-pxwb4" in "kube-system" namespace has status "Ready":"False"
	I0915 21:01:49.452363   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-pxwb4" in "kube-system" namespace has status "Ready":"False"
	I0915 21:01:51.525296   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-pxwb4" in "kube-system" namespace has status "Ready":"False"
	I0915 21:01:53.947935   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-pxwb4" in "kube-system" namespace has status "Ready":"False"
	I0915 21:01:56.027075   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-pxwb4" in "kube-system" namespace has status "Ready":"False"
	I0915 21:01:58.606645   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-pxwb4" in "kube-system" namespace has status "Ready":"False"
	I0915 21:02:00.749044   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-pxwb4" in "kube-system" namespace has status "Ready":"False"
	I0915 21:02:02.986676   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-pxwb4" in "kube-system" namespace has status "Ready":"False"
	I0915 21:02:05.604697   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-pxwb4" in "kube-system" namespace has status "Ready":"False"
	I0915 21:02:08.342362   59380 pod_ready.go:92] pod "coredns-fb8b8dccf-pxwb4" in "kube-system" namespace has status "Ready":"True"
	I0915 21:02:08.342362   59380 pod_ready.go:81] duration metric: took 1m37.0743164s waiting for pod "coredns-fb8b8dccf-pxwb4" in "kube-system" namespace to be "Ready" ...
	I0915 21:02:08.342628   59380 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-20210915203352-22848" in "kube-system" namespace to be "Ready" ...
	I0915 21:02:08.431670   59380 pod_ready.go:92] pod "etcd-old-k8s-version-20210915203352-22848" in "kube-system" namespace has status "Ready":"True"
	I0915 21:02:08.431837   59380 pod_ready.go:81] duration metric: took 89.2095ms waiting for pod "etcd-old-k8s-version-20210915203352-22848" in "kube-system" namespace to be "Ready" ...
	I0915 21:02:08.431837   59380 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-20210915203352-22848" in "kube-system" namespace to be "Ready" ...
	I0915 21:02:08.530628   59380 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-20210915203352-22848" in "kube-system" namespace has status "Ready":"True"
	I0915 21:02:08.530628   59380 pod_ready.go:81] duration metric: took 98.7918ms waiting for pod "kube-apiserver-old-k8s-version-20210915203352-22848" in "kube-system" namespace to be "Ready" ...
	I0915 21:02:08.530628   59380 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-20210915203352-22848" in "kube-system" namespace to be "Ready" ...
	I0915 21:02:08.623612   59380 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-20210915203352-22848" in "kube-system" namespace has status "Ready":"True"
	I0915 21:02:08.623744   59380 pod_ready.go:81] duration metric: took 93.1164ms waiting for pod "kube-controller-manager-old-k8s-version-20210915203352-22848" in "kube-system" namespace to be "Ready" ...
	I0915 21:02:08.623744   59380 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fjmvd" in "kube-system" namespace to be "Ready" ...
	I0915 21:02:08.682303   59380 pod_ready.go:92] pod "kube-proxy-fjmvd" in "kube-system" namespace has status "Ready":"True"
	I0915 21:02:08.682576   59380 pod_ready.go:81] duration metric: took 58.736ms waiting for pod "kube-proxy-fjmvd" in "kube-system" namespace to be "Ready" ...
	I0915 21:02:08.682576   59380 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-20210915203352-22848" in "kube-system" namespace to be "Ready" ...
	I0915 21:02:08.764335   59380 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-20210915203352-22848" in "kube-system" namespace has status "Ready":"True"
	I0915 21:02:08.764335   59380 pod_ready.go:81] duration metric: took 81.7595ms waiting for pod "kube-scheduler-old-k8s-version-20210915203352-22848" in "kube-system" namespace to be "Ready" ...
	I0915 21:02:08.764335   59380 pod_ready.go:38] duration metric: took 1m42.6827872s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 21:02:08.764335   59380 api_server.go:50] waiting for apiserver process to appear ...
	I0915 21:02:08.788795   59380 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 21:02:11.886042   59380 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: (3.0970689s)
	I0915 21:02:11.886149   59380 logs.go:270] 1 containers: [3537b9c6e4ed]
	I0915 21:02:11.900760   59380 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 21:02:14.320564   59380 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: (2.4194268s)
	I0915 21:02:14.320721   59380 logs.go:270] 1 containers: [0b52d929b618]
	I0915 21:02:14.340316   59380 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 21:02:16.462510   59380 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: (2.1221147s)
	I0915 21:02:16.462510   59380 logs.go:270] 2 containers: [56f408c7ef80 edc543ebc579]
	I0915 21:02:16.481520   59380 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 21:02:17.909577   59380 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: (1.4279284s)
	I0915 21:02:17.909577   59380 logs.go:270] 1 containers: [f62b255531be]
	I0915 21:02:17.922156   59380 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 21:02:19.029541   59380 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: (1.1073925s)
	I0915 21:02:19.030029   59380 logs.go:270] 1 containers: [b6a7846cc5e6]
	I0915 21:02:19.057397   59380 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0915 21:02:20.309647   59380 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}: (1.2511439s)
	I0915 21:02:20.309647   59380 logs.go:270] 1 containers: [cf62060e4365]
	I0915 21:02:20.325297   59380 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 21:02:22.494620   59380 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: (2.1690651s)
	I0915 21:02:22.494620   59380 logs.go:270] 1 containers: [2708aca9f909]
	I0915 21:02:22.513090   59380 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 21:02:23.899916   59380 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: (1.3868346s)
	I0915 21:02:23.900058   59380 logs.go:270] 2 containers: [5dcc93538d3a 2444d44bba6b]
	I0915 21:02:23.900058   59380 logs.go:123] Gathering logs for kube-apiserver [3537b9c6e4ed] ...
	I0915 21:02:23.900058   59380 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 3537b9c6e4ed"
	I0915 21:02:26.584619   59380 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 3537b9c6e4ed": (2.6845789s)
	I0915 21:02:26.628390   59380 logs.go:123] Gathering logs for etcd [0b52d929b618] ...
	I0915 21:02:26.628390   59380 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 0b52d929b618"
	I0915 21:02:29.188968   59380 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 0b52d929b618": (2.5605947s)
	I0915 21:02:29.222295   59380 logs.go:123] Gathering logs for coredns [56f408c7ef80] ...
	I0915 21:02:29.222653   59380 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 56f408c7ef80"
	I0915 21:02:30.913622   59380 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 56f408c7ef80": (1.6909792s)
	I0915 21:02:30.914656   59380 logs.go:123] Gathering logs for Docker ...
	I0915 21:02:30.914656   59380 ssh_runner.go:152] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0915 21:02:31.073495   59380 logs.go:123] Gathering logs for container status ...
	I0915 21:02:31.073495   59380 ssh_runner.go:152] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 21:02:32.132766   59380 ssh_runner.go:192] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (1.0592772s)
	I0915 21:02:32.134065   59380 logs.go:123] Gathering logs for dmesg ...
	I0915 21:02:32.134065   59380 ssh_runner.go:152] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 21:02:32.761531   59380 logs.go:123] Gathering logs for describe nodes ...
	I0915 21:02:32.761774   59380 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 21:02:40.222423   59380 ssh_runner.go:192] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (7.4606972s)
	I0915 21:02:40.248362   59380 logs.go:123] Gathering logs for kube-scheduler [f62b255531be] ...
	I0915 21:02:40.250291   59380 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 f62b255531be"
	I0915 21:02:44.722028   59380 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 f62b255531be": (4.4715535s)
	I0915 21:02:44.739482   59380 logs.go:123] Gathering logs for storage-provisioner [2708aca9f909] ...
	I0915 21:02:44.739702   59380 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 2708aca9f909"
	I0915 21:02:48.982771   59380 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 2708aca9f909": (4.2430962s)
	I0915 21:02:48.982771   59380 logs.go:123] Gathering logs for kube-controller-manager [5dcc93538d3a] ...
	I0915 21:02:48.982771   59380 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 5dcc93538d3a"
	I0915 21:02:57.130767   59380 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 5dcc93538d3a": (8.1478521s)
	I0915 21:02:57.167857   59380 logs.go:123] Gathering logs for coredns [edc543ebc579] ...
	I0915 21:02:57.167857   59380 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 edc543ebc579"
	I0915 21:03:00.457615   59380 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 edc543ebc579": (3.2897791s)
	I0915 21:03:00.459196   59380 logs.go:123] Gathering logs for kube-controller-manager [2444d44bba6b] ...
	I0915 21:03:00.459196   59380 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 2444d44bba6b"
	I0915 21:03:02.905669   59380 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 2444d44bba6b": (2.4464887s)
	I0915 21:03:02.906730   59380 logs.go:123] Gathering logs for kubelet ...
	I0915 21:03:02.906730   59380 ssh_runner.go:152] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 21:03:03.939909   59380 ssh_runner.go:192] Completed: /bin/bash -c "sudo journalctl -u kubelet -n 400": (1.0331857s)
	W0915 21:03:04.019616   59380 logs.go:138] Found kubelet problem: Sep 15 21:01:39 old-k8s-version-20210915203352-22848 kubelet[6242]: E0915 21:01:39.816057    6242 pod_workers.go:190] Error syncing pod 13c53a65-1668-11ec-9ee7-0242535a2e8f ("metrics-server-8546d8b77b-pwkmq_kube-system(13c53a65-1668-11ec-9ee7-0242535a2e8f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	I0915 21:03:04.019616   59380 logs.go:123] Gathering logs for kube-proxy [b6a7846cc5e6] ...
	I0915 21:03:04.020627   59380 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 b6a7846cc5e6"
	I0915 21:03:07.698325   59380 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 b6a7846cc5e6": (3.6775764s)
	I0915 21:03:07.699351   59380 logs.go:123] Gathering logs for kubernetes-dashboard [cf62060e4365] ...
	I0915 21:03:07.699512   59380 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 cf62060e4365"
	I0915 21:03:11.626181   59380 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 cf62060e4365": (3.9266946s)
	I0915 21:03:11.627244   59380 out.go:311] Setting ErrFile to fd 2480...
	I0915 21:03:11.627244   59380 out.go:345] TERM=,COLORTERM=, which probably does not support color
	W0915 21:03:11.628241   59380 out.go:242] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0915 21:03:11.628241   59380 out.go:242]   Sep 15 21:01:39 old-k8s-version-20210915203352-22848 kubelet[6242]: E0915 21:01:39.816057    6242 pod_workers.go:190] Error syncing pod 13c53a65-1668-11ec-9ee7-0242535a2e8f ("metrics-server-8546d8b77b-pwkmq_kube-system(13c53a65-1668-11ec-9ee7-0242535a2e8f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	  Sep 15 21:01:39 old-k8s-version-20210915203352-22848 kubelet[6242]: E0915 21:01:39.816057    6242 pod_workers.go:190] Error syncing pod 13c53a65-1668-11ec-9ee7-0242535a2e8f ("metrics-server-8546d8b77b-pwkmq_kube-system(13c53a65-1668-11ec-9ee7-0242535a2e8f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	I0915 21:03:11.628241   59380 out.go:311] Setting ErrFile to fd 2480...
	I0915 21:03:11.628241   59380 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0915 21:03:21.642458   59380 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 21:03:22.962331   59380 ssh_runner.go:192] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.3198818s)
	I0915 21:03:22.962331   59380 api_server.go:70] duration metric: took 3m6.7549273s to wait for apiserver process to appear ...
	I0915 21:03:22.962331   59380 api_server.go:86] waiting for apiserver healthz status ...
	I0915 21:03:22.973266   59380 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 21:03:25.331696   59380 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: (2.3584456s)
	I0915 21:03:25.331696   59380 logs.go:270] 1 containers: [3537b9c6e4ed]
	I0915 21:03:25.354855   59380 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 21:03:28.602931   59380 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: (3.2480972s)
	I0915 21:03:28.602931   59380 logs.go:270] 1 containers: [0b52d929b618]
	I0915 21:03:28.619223   59380 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 21:03:31.478130   59380 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: (2.8589259s)
	I0915 21:03:31.478960   59380 logs.go:270] 2 containers: [56f408c7ef80 edc543ebc579]
	I0915 21:03:31.522508   59380 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 21:03:34.377512   59380 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: (2.8550217s)
	I0915 21:03:34.377512   59380 logs.go:270] 1 containers: [f62b255531be]
	I0915 21:03:34.393177   59380 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 21:03:35.911771   59380 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: (1.5186033s)
	I0915 21:03:35.911910   59380 logs.go:270] 1 containers: [b6a7846cc5e6]
	I0915 21:03:35.935753   59380 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0915 21:03:39.083471   59380 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}: (3.147738s)
	I0915 21:03:39.083471   59380 logs.go:270] 1 containers: [cf62060e4365]
	I0915 21:03:39.093501   59380 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 21:03:43.154841   59380 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: (4.0613659s)
	I0915 21:03:43.154841   59380 logs.go:270] 1 containers: [2708aca9f909]
	I0915 21:03:43.166883   59380 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 21:03:46.250128   59380 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: (3.0832639s)
	I0915 21:03:46.250254   59380 logs.go:270] 2 containers: [5dcc93538d3a 2444d44bba6b]
	I0915 21:03:46.250254   59380 logs.go:123] Gathering logs for describe nodes ...
	I0915 21:03:46.250254   59380 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"

                                                
                                                
** /stderr **
start_stop_delete_test.go:244: failed to start minikube post-stop. args "out/minikube-windows-amd64.exe start -p old-k8s-version-20210915203352-22848 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.14.0": exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect old-k8s-version-20210915203352-22848
helpers_test.go:236: (dbg) docker inspect old-k8s-version-20210915203352-22848:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9b27ec9d5a2cd174bf43f235c940004401f58a18a94c29ec4a6ee4a2b784f75d",
	        "Created": "2021-09-15T20:38:54.6399688Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 211404,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-09-15T20:50:29.5187633Z",
	            "FinishedAt": "2021-09-15T20:50:10.0365142Z"
	        },
	        "Image": "sha256:83b5a81388468b1ffcd3874b4f24c1406c63c33ac07797cc8bed6ad0207d36a8",
	        "ResolvConfPath": "/var/lib/docker/containers/9b27ec9d5a2cd174bf43f235c940004401f58a18a94c29ec4a6ee4a2b784f75d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9b27ec9d5a2cd174bf43f235c940004401f58a18a94c29ec4a6ee4a2b784f75d/hostname",
	        "HostsPath": "/var/lib/docker/containers/9b27ec9d5a2cd174bf43f235c940004401f58a18a94c29ec4a6ee4a2b784f75d/hosts",
	        "LogPath": "/var/lib/docker/containers/9b27ec9d5a2cd174bf43f235c940004401f58a18a94c29ec4a6ee4a2b784f75d/9b27ec9d5a2cd174bf43f235c940004401f58a18a94c29ec4a6ee4a2b784f75d-json.log",
	        "Name": "/old-k8s-version-20210915203352-22848",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20210915203352-22848:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20210915203352-22848",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c50296d19c7373d9d15bdd50bc7ff47de1d4c184edd1ad66ed8291d8261193a3-init/diff:/var/lib/docker/overlay2/a259804ff45c264548e9459111f8eb7e789339b3253b50b62afde896e9e19e34/diff:/var/lib/docker/overlay2/61882a81480713e64bf02bef67583a0609b2be0589d08187547a88789584af86/diff:/var/lib/docker/overlay2/a41d1f5e24156c1d438fe25c567f3c3492d15cb77b1bf5545be9086be845138a/diff:/var/lib/docker/overlay2/86e30e10438032d0a02b54850ad0316347488f3d5b831234af1e91f943269850/diff:/var/lib/docker/overlay2/f6962936c0c1b0636454847e8e963a472786602e15a00d5e020827c2372acfce/diff:/var/lib/docker/overlay2/5eee83c6029359aefecbba85cc6d456e3a5a97c3ef6e9f4850e8a53c62b30ef5/diff:/var/lib/docker/overlay2/fdaa4e134ab960962e0a388adaa3a6aa59dd139cc016dfd4cdf4565bc80e8469/diff:/var/lib/docker/overlay2/9e1b9be7e17136fa81b0a224e2fab9704d3234ca119d87c14f9a676bbdb023f5/diff:/var/lib/docker/overlay2/ffe06185e93cb7ae8d48d84ea9be8817f2ae3d2aae85114ce41477579e23debd/diff:/var/lib/docker/overlay2/22171320a621ffe79c2acb0c13308b1b0cd3bc94a4083992e7b8589b820c625c/diff:/var/lib/docker/overlay2/eb2fb3ccafd6cb1c26a9642601357b3e0563e9e9361a5ab359bf1af592a0d709/diff:/var/lib/docker/overlay2/6081368e802a14f6f6a7424eb7af3f5f29f85bf59ed0a0709ce25b53738095cb/diff:/var/lib/docker/overlay2/fd7176e5912a824a0543fa3ab5170921538a287401ff8a451c90e1ef0fd8adea/diff:/var/lib/docker/overlay2/eec5078968f5e7332ff82191a780be0efef38aef75ea7cd67723ab3d2760c281/diff:/var/lib/docker/overlay2/d18d41a44c04cb695c4b69ac0db0d5807cee4ca8a5a695629f97e2d8d9cf9461/diff:/var/lib/docker/overlay2/b125406c01cea6a83fa5515a19bb6822d1194fcd47eeb1ed541b9304804a54be/diff:/var/lib/docker/overlay2/b49ae7a2c3101c5b094f611e08fb7b68d8688cb3c333066f697aafc1dc7c2c7e/diff:/var/lib/docker/overlay2/ce599106d279966257baab0cc43ed0366d690702b449073e812a47ae6698dedf/diff:/var/lib/docker/overlay2/5f005c2e8ab4cd52b59f5118e6f5e352dd834afde547ba1ee7b71141319e3547/diff:/var/lib/docker/overlay2/2b1f9abca5d32e21fe1da66b2604d858599b74fc9359bd55e050cebccaba5c7d/diff:/var/lib/docker/overlay2/a5f956d0de2a0313dfbaefb921518d8a75267b71a9e7c68207a81682db5394b5/diff:/var/lib/docker/overlay2/e0050af32b9eb0f12404cf384139cd48050d4a969d090faaa07b9f42fe954627/diff:/var/lib/docker/overlay2/f18c15fd90b361f7a13265b5426d985a47e261abde790665028916551b5218f3/diff:/var/lib/docker/overlay2/0f266ad6b65c857206fd10e121b74564370ca213f5706493619b6a590c496660/diff:/var/lib/docker/overlay2/fc044060d3681022984120753b0c02afc05afbb256dbdfc9f7f5e966e1d98820/diff:/var/lib/docker/overlay2/91df5011d1388013be2af7bb3097195366fd38d1f46d472e630aab583779f7c0/diff:/var/lib/docker/overlay2/f810a7fbc880b9ff7c367b14e34088e851fa045d860ce4bf4c49999fcf814a6e/diff:/var/lib/docker/overlay2/318584cae4acc059b81627e00ae703167673c73d234d6e64e894fc3500750f90/diff:/var/lib/docker/overlay2/a2e1d86ffb5aec517fe891619294d506621a002f4c53e8d3103d5d4ce777ebaf/diff:/var/lib/docker/overlay2/12fd1d215a6881aa03a06f2b8a5415b483530db121b120b66940e1e5cd2e1b96/diff:/var/lib/docker/overlay2/28bbbfc0404aecb7d7d79b4c2bfec07cd44260c922a982af523bda70bbd7be20/diff:/var/lib/docker/overlay2/4dc0077174d58a8904abddfc67a48e6dd082a1eebc72518af19da37b4eff7b2c/diff:/var/lib/docker/overlay2/4d39db844b44258dbb67b16662175b453df7bfd43274abbf1968486539955750/diff:/var/lib/docker/overlay2/ca34d73c6c31358a3eb714a014a5961863e05dee505a1cfca2c8829380ce362b/diff:/var/lib/docker/overlay2/0c0595112799a0b3604c58158946fb3d0657c4198a6a72e12fbe29a74174d3ea/diff:/var/lib/docker/overlay2/5fc43276da56e90293816918613014e7cec7bedc292a062d39d034c95d56351d/diff:/var/lib/docker/overlay2/71a282cb60752128ee370ced1695c67c421341d364956818e5852fd6714a0e64/diff:/var/lib/docker/overlay2/07723c7054e35caae4987fa66d3d1fd44de0d2875612274dde2bf04e8349b0a0/diff:/var/lib/docker/overlay2/0433db88749fb49b0f02cc65b7113c97134270991a8a82bbe7ff4432aae7e502/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c50296d19c7373d9d15bdd50bc7ff47de1d4c184edd1ad66ed8291d8261193a3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c50296d19c7373d9d15bdd50bc7ff47de1d4c184edd1ad66ed8291d8261193a3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c50296d19c7373d9d15bdd50bc7ff47de1d4c184edd1ad66ed8291d8261193a3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20210915203352-22848",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20210915203352-22848/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20210915203352-22848",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20210915203352-22848",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20210915203352-22848",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3a4ba5744cc7686e414b3dea11608c1cbf27a62a0632f8769e74344694292442",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57811"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57812"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57813"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57814"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57815"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/3a4ba5744cc7",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20210915203352-22848": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "9b27ec9d5a2c",
	                        "old-k8s-version-20210915203352-22848"
	                    ],
	                    "NetworkID": "d05bee2b7d78b979c0756af91adef980ef055503d6582e76d04d9610112497ba",
	                    "EndpointID": "196f3b5e1add9aca66a26a35d443525724a171737b819f7a2def0f131919f8c9",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20210915203352-22848 -n old-k8s-version-20210915203352-22848

=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
helpers_test.go:240: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20210915203352-22848 -n old-k8s-version-20210915203352-22848: (11.2411721s)
helpers_test.go:245: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p old-k8s-version-20210915203352-22848 logs -n 25

=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
helpers_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p old-k8s-version-20210915203352-22848 logs -n 25: (44.4126704s)
helpers_test.go:253: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-------------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                     Profile                     |          User           | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|-------------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	| addons  | enable dashboard -p                               | no-preload-20210915203420-22848                 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 20:42:45 GMT | Wed, 15 Sep 2021 20:42:48 GMT |
	|         | no-preload-20210915203420-22848                   |                                                 |                         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |                         |         |                               |                               |
	| start   | -p                                                | embed-certs-20210915203657-22848                | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 20:36:57 GMT | Wed, 15 Sep 2021 20:45:14 GMT |
	|         | embed-certs-20210915203657-22848                  |                                                 |                         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                 |                         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                                 |                         |         |                               |                               |
	|         | --driver=docker                                   |                                                 |                         |         |                               |                               |
	|         | --kubernetes-version=v1.22.1                      |                                                 |                         |         |                               |                               |
	| start   | -p                                                | kubernetes-upgrade-20210915203315-22848         | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 20:33:16 GMT | Wed, 15 Sep 2021 20:45:31 GMT |
	|         | kubernetes-upgrade-20210915203315-22848           |                                                 |                         |         |                               |                               |
	|         | --memory=2200                                     |                                                 |                         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                      |                                                 |                         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker            |                                                 |                         |         |                               |                               |
	| stop    | -p                                                | kubernetes-upgrade-20210915203315-22848         | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 20:45:32 GMT | Wed, 15 Sep 2021 20:46:05 GMT |
	|         | kubernetes-upgrade-20210915203315-22848           |                                                 |                         |         |                               |                               |
	| addons  | enable metrics-server -p                          | embed-certs-20210915203657-22848                | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 20:46:15 GMT | Wed, 15 Sep 2021 20:46:29 GMT |
	|         | embed-certs-20210915203657-22848                  |                                                 |                         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |                         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                 |                         |         |                               |                               |
	| stop    | -p                                                | embed-certs-20210915203657-22848                | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 20:46:30 GMT | Wed, 15 Sep 2021 20:47:06 GMT |
	|         | embed-certs-20210915203657-22848                  |                                                 |                         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                 |                         |         |                               |                               |
	| addons  | enable dashboard -p                               | embed-certs-20210915203657-22848                | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 20:47:10 GMT | Wed, 15 Sep 2021 20:47:13 GMT |
	|         | embed-certs-20210915203657-22848                  |                                                 |                         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |                         |         |                               |                               |
	| start   | -p                                                | old-k8s-version-20210915203352-22848            | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 20:33:52 GMT | Wed, 15 Sep 2021 20:48:59 GMT |
	|         | old-k8s-version-20210915203352-22848              |                                                 |                         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                 |                         |         |                               |                               |
	|         | --wait=true --kvm-network=default                 |                                                 |                         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                 |                         |         |                               |                               |
	|         | --disable-driver-mounts                           |                                                 |                         |         |                               |                               |
	|         | --keep-context=false --driver=docker              |                                                 |                         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                      |                                                 |                         |         |                               |                               |
	| addons  | enable metrics-server -p                          | old-k8s-version-20210915203352-22848            | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 20:49:29 GMT | Wed, 15 Sep 2021 20:49:41 GMT |
	|         | old-k8s-version-20210915203352-22848              |                                                 |                         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |                         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                 |                         |         |                               |                               |
	| stop    | -p                                                | old-k8s-version-20210915203352-22848            | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 20:49:42 GMT | Wed, 15 Sep 2021 20:50:13 GMT |
	|         | old-k8s-version-20210915203352-22848              |                                                 |                         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                 |                         |         |                               |                               |
	| addons  | enable dashboard -p                               | old-k8s-version-20210915203352-22848            | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 20:50:16 GMT | Wed, 15 Sep 2021 20:50:18 GMT |
	|         | old-k8s-version-20210915203352-22848              |                                                 |                         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |                         |         |                               |                               |
	| start   | -p                                                | kubernetes-upgrade-20210915203315-22848         | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 20:46:09 GMT | Wed, 15 Sep 2021 20:51:13 GMT |
	|         | kubernetes-upgrade-20210915203315-22848           |                                                 |                         |         |                               |                               |
	|         | --memory=2200                                     |                                                 |                         |         |                               |                               |
	|         | --kubernetes-version=v1.22.2-rc.0                 |                                                 |                         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker            |                                                 |                         |         |                               |                               |
	| start   | -p                                                | kubernetes-upgrade-20210915203315-22848         | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 20:51:15 GMT | Wed, 15 Sep 2021 20:52:25 GMT |
	|         | kubernetes-upgrade-20210915203315-22848           |                                                 |                         |         |                               |                               |
	|         | --memory=2200                                     |                                                 |                         |         |                               |                               |
	|         | --kubernetes-version=v1.22.2-rc.0                 |                                                 |                         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker            |                                                 |                         |         |                               |                               |
	| delete  | -p                                                | kubernetes-upgrade-20210915203315-22848         | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 20:52:25 GMT | Wed, 15 Sep 2021 20:53:02 GMT |
	|         | kubernetes-upgrade-20210915203315-22848           |                                                 |                         |         |                               |                               |
	| delete  | -p                                                | disable-driver-mounts-20210915205302-22848      | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 20:53:03 GMT | Wed, 15 Sep 2021 20:53:15 GMT |
	|         | disable-driver-mounts-20210915205302-22848        |                                                 |                         |         |                               |                               |
	| start   | -p                                                | no-preload-20210915203420-22848                 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 20:42:48 GMT | Wed, 15 Sep 2021 20:57:24 GMT |
	|         | no-preload-20210915203420-22848                   |                                                 |                         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                 |                         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                                 |                         |         |                               |                               |
	|         | --driver=docker                                   |                                                 |                         |         |                               |                               |
	|         | --kubernetes-version=v1.22.2-rc.0                 |                                                 |                         |         |                               |                               |
	| ssh     | -p                                                | no-preload-20210915203420-22848                 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 20:59:31 GMT | Wed, 15 Sep 2021 20:59:39 GMT |
	|         | no-preload-20210915203420-22848                   |                                                 |                         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                 |                         |         |                               |                               |
	| pause   | -p                                                | no-preload-20210915203420-22848                 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 20:59:39 GMT | Wed, 15 Sep 2021 20:59:54 GMT |
	|         | no-preload-20210915203420-22848                   |                                                 |                         |         |                               |                               |
	|         | --alsologtostderr -v=1                            |                                                 |                         |         |                               |                               |
	| unpause | -p                                                | no-preload-20210915203420-22848                 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 21:00:06 GMT | Wed, 15 Sep 2021 21:00:19 GMT |
	|         | no-preload-20210915203420-22848                   |                                                 |                         |         |                               |                               |
	|         | --alsologtostderr -v=1                            |                                                 |                         |         |                               |                               |
	| delete  | -p                                                | no-preload-20210915203420-22848                 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 21:00:35 GMT | Wed, 15 Sep 2021 21:01:15 GMT |
	|         | no-preload-20210915203420-22848                   |                                                 |                         |         |                               |                               |
	| delete  | -p                                                | no-preload-20210915203420-22848                 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 21:01:16 GMT | Wed, 15 Sep 2021 21:01:29 GMT |
	|         | no-preload-20210915203420-22848                   |                                                 |                         |         |                               |                               |
	| start   | -p                                                | default-k8s-different-port-20210915205315-22848 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 20:53:15 GMT | Wed, 15 Sep 2021 21:02:10 GMT |
	|         | default-k8s-different-port-20210915205315-22848   |                                                 |                         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                 |                         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=docker             |                                                 |                         |         |                               |                               |
	|         | --kubernetes-version=v1.22.1                      |                                                 |                         |         |                               |                               |
	| start   | -p                                                | embed-certs-20210915203657-22848                | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 20:47:13 GMT | Wed, 15 Sep 2021 21:03:15 GMT |
	|         | embed-certs-20210915203657-22848                  |                                                 |                         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                 |                         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                                 |                         |         |                               |                               |
	|         | --driver=docker                                   |                                                 |                         |         |                               |                               |
	|         | --kubernetes-version=v1.22.1                      |                                                 |                         |         |                               |                               |
	| ssh     | -p                                                | embed-certs-20210915203657-22848                | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 21:03:43 GMT | Wed, 15 Sep 2021 21:03:59 GMT |
	|         | embed-certs-20210915203657-22848                  |                                                 |                         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                 |                         |         |                               |                               |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20210915205315-22848 | WINDOWS-SERVER-\jenkins | v1.23.0 | Wed, 15 Sep 2021 21:03:31 GMT | Wed, 15 Sep 2021 21:04:06 GMT |
	|         | default-k8s-different-port-20210915205315-22848   |                                                 |                         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |                         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                 |                         |         |                               |                               |
	|---------|---------------------------------------------------|-------------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/09/15 21:01:29
	Running on machine: windows-server-1
	Binary: Built with gc go1.17 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0915 21:01:29.498656   71312 out.go:298] Setting OutFile to fd 2668 ...
	I0915 21:01:29.498656   71312 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0915 21:01:29.498656   71312 out.go:311] Setting ErrFile to fd 2532...
	I0915 21:01:29.498656   71312 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0915 21:01:29.527846   71312 out.go:305] Setting JSON to false
	I0915 21:01:29.533834   71312 start.go:111] hostinfo: {"hostname":"windows-server-1","uptime":9159163,"bootTime":1622580526,"procs":158,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"9879231f-6171-435d-bab4-5b366cc6391b"}
	W0915 21:01:29.534832   71312 start.go:119] gopshost.Virtualization returned error: not implemented yet
	I0915 21:01:29.538826   71312 out.go:177] * [newest-cni-20210915210129-22848] minikube v1.23.0 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	I0915 21:01:29.539838   71312 notify.go:169] Checking for updates...
	I0915 21:01:29.541832   71312 out.go:177]   - KUBECONFIG=C:\Users\jenkins\minikube-integration\kubeconfig
	I0915 21:01:29.543863   71312 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins\minikube-integration\.minikube
	I0915 21:01:25.874571   57992 logs.go:123] Gathering logs for Docker ...
	I0915 21:01:25.874571   57992 ssh_runner.go:152] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0915 21:01:26.636407   57992 logs.go:123] Gathering logs for container status ...
	I0915 21:01:26.636744   57992 ssh_runner.go:152] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 21:01:28.751029   57992 ssh_runner.go:192] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.1142984s)
	I0915 21:01:29.704252   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-pxwb4" in "kube-system" namespace has status "Ready":"False"
	I0915 21:01:30.670252   59380 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (25.0215831s)
	I0915 21:01:29.545832   71312 out.go:177]   - MINIKUBE_LOCATION=12425
	I0915 21:01:29.547833   71312 config.go:177] Loaded profile config "default-k8s-different-port-20210915205315-22848": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.1
	I0915 21:01:29.547833   71312 config.go:177] Loaded profile config "embed-certs-20210915203657-22848": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.1
	I0915 21:01:29.548893   71312 config.go:177] Loaded profile config "old-k8s-version-20210915203352-22848": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.14.0
	I0915 21:01:29.548893   71312 config.go:177] Loaded profile config "pause-20210915200708-22848": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.1
	I0915 21:01:29.549902   71312 driver.go:343] Setting default libvirt URI to qemu:///system
	I0915 21:01:31.777050   71312 docker.go:132] docker version: linux-20.10.5
	I0915 21:01:31.791482   71312 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0915 21:01:33.136955   71312 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.345481s)
	I0915 21:01:33.138771   71312 info.go:263] docker info: {ID:AZM6:4F7P:D7J3:PGKE:EIYN:3OQU:SEA3:BB2T:P6VC:GKKH:UKSA:R2VX Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:76 OomKillDisable:true NGoroutines:74 SystemTime:2021-09-15 21:01:32.5173986 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0915 21:01:28.663534   60636 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: (2.2388434s)
	I0915 21:01:28.663534   60636 logs.go:270] 1 containers: [dd15470492e2]
	I0915 21:01:28.683664   60636 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 21:01:29.893598   60636 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: (1.2099417s)
	I0915 21:01:29.893598   60636 logs.go:270] 2 containers: [5c2cc55aa311 3f363fd7e539]
	I0915 21:01:29.893598   60636 logs.go:123] Gathering logs for kube-apiserver [5b512fefe3b9] ...
	I0915 21:01:29.893598   60636 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 5b512fefe3b9"
	I0915 21:01:30.996543   60636 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 5b512fefe3b9": (1.1029518s)
	I0915 21:01:31.017773   60636 logs.go:123] Gathering logs for kube-scheduler [c93c1026cc1a] ...
	I0915 21:01:31.018003   60636 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 c93c1026cc1a"
	I0915 21:01:31.816269   60636 logs.go:123] Gathering logs for kubelet ...
	I0915 21:01:31.816520   60636 ssh_runner.go:152] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 21:01:32.533140   60636 logs.go:123] Gathering logs for describe nodes ...
	I0915 21:01:32.533140   60636 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 21:01:30.675627   59380 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0915 21:01:30.675920   59380 addons.go:406] enableAddons completed in 1m14.4658098s
	I0915 21:01:31.930045   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-pxwb4" in "kube-system" namespace has status "Ready":"False"
	I0915 21:01:33.145988   71312 out.go:177] * Using the docker driver based on user configuration
	I0915 21:01:33.146149   71312 start.go:278] selected driver: docker
	I0915 21:01:33.146393   71312 start.go:751] validating driver "docker" against <nil>
	I0915 21:01:33.146393   71312 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0915 21:01:33.377854   71312 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0915 21:01:34.659660   71312 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.2815979s)
	I0915 21:01:34.660543   71312 info.go:263] docker info: {ID:AZM6:4F7P:D7J3:PGKE:EIYN:3OQU:SEA3:BB2T:P6VC:GKKH:UKSA:R2VX Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:76 OomKillDisable:true NGoroutines:74 SystemTime:2021-09-15 21:01:34.0300921 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0915 21:01:34.661042   71312 start_flags.go:264] no existing cluster config was found, will generate one from the flags 
	W0915 21:01:34.661678   71312 out.go:242] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0915 21:01:34.666347   71312 start_flags.go:756] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0915 21:01:34.666551   71312 cni.go:93] Creating CNI manager for ""
	I0915 21:01:34.666762   71312 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0915 21:01:34.666762   71312 start_flags.go:278] config:
	{Name:newest-cni-20210915210129-22848 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.2-rc.0 ClusterName:newest-cni-20210915210129-22848 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0915 21:01:34.670746   71312 out.go:177] * Starting control plane node newest-cni-20210915210129-22848 in cluster newest-cni-20210915210129-22848
	I0915 21:01:34.670965   71312 cache.go:118] Beginning downloading kic base image for docker with docker
	I0915 21:01:31.253244   57992 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:57928/healthz ...
	I0915 21:01:31.347436   57992 api_server.go:265] https://127.0.0.1:57928/healthz returned 200:
	ok
	I0915 21:01:31.365530   57992 api_server.go:139] control plane version: v1.22.1
	I0915 21:01:31.365530   57992 api_server.go:129] duration metric: took 35.6275244s to wait for apiserver health ...
	I0915 21:01:31.365530   57992 system_pods.go:43] waiting for kube-system pods to appear ...
	I0915 21:01:31.374545   57992 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 21:01:32.188285   57992 logs.go:270] 1 containers: [7e0da6df2ce9]
	I0915 21:01:32.198743   57992 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 21:01:32.988506   57992 logs.go:270] 1 containers: [cc02e6c3d1f4]
	I0915 21:01:32.999707   57992 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 21:01:33.829433   57992 logs.go:270] 1 containers: [8caa653135bf]
	I0915 21:01:33.845197   57992 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 21:01:34.676029   71312 out.go:177] * Pulling base image ...
	I0915 21:01:34.676197   71312 preload.go:131] Checking if preload exists for k8s version v1.22.2-rc.0 and runtime docker
	I0915 21:01:34.676520   71312 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 in local docker daemon
	I0915 21:01:34.676675   71312 preload.go:147] Found local preload: C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v12-v1.22.2-rc.0-docker-overlay2-amd64.tar.lz4
	I0915 21:01:34.676896   71312 cache.go:57] Caching tarball of preloaded images
	I0915 21:01:34.677667   71312 preload.go:173] Found C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v12-v1.22.2-rc.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0915 21:01:34.678224   71312 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.2-rc.0 on docker
	I0915 21:01:34.678554   71312 profile.go:148] Saving config to C:\Users\jenkins\minikube-integration\.minikube\profiles\newest-cni-20210915210129-22848\config.json ...
	I0915 21:01:34.680628   71312 lock.go:36] WriteFile acquiring C:\Users\jenkins\minikube-integration\.minikube\profiles\newest-cni-20210915210129-22848\config.json: {Name:mk924de5ec852f9b1c000f71eed191bf8edd47a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 21:01:35.535265   71312 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 in local docker daemon, skipping pull
	I0915 21:01:35.535442   71312 cache.go:140] gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 exists in daemon, skipping load
	I0915 21:01:35.535679   71312 cache.go:206] Successfully downloaded all kic artifacts
	I0915 21:01:35.536428   71312 start.go:313] acquiring machines lock for newest-cni-20210915210129-22848: {Name:mk22a38dbdfe93064c971061a28e7df34d849025 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 21:01:35.536691   71312 start.go:317] acquired machines lock for "newest-cni-20210915210129-22848" in 262.4µs
	I0915 21:01:35.537281   71312 start.go:89] Provisioning new machine with config: &{Name:newest-cni-20210915210129-22848 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.2-rc.0 ClusterName:newest-cni-20210915210129-22848 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.2-rc.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.22.2-rc.0 ControlPlane:true Worker:true}
	I0915 21:01:35.537537   71312 start.go:126] createHost starting for "" (driver="docker")
	I0915 21:01:35.780248   60636 ssh_runner.go:192] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (3.2471285s)
	I0915 21:01:35.789820   60636 logs.go:123] Gathering logs for etcd [225ed7592959] ...
	I0915 21:01:35.790079   60636 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 225ed7592959"
	I0915 21:01:33.957283   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-pxwb4" in "kube-system" namespace has status "Ready":"False"
	I0915 21:01:36.008270   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-pxwb4" in "kube-system" namespace has status "Ready":"False"
	I0915 21:01:38.462846   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-pxwb4" in "kube-system" namespace has status "Ready":"False"
	I0915 21:01:35.541312   71312 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0915 21:01:35.542331   71312 start.go:160] libmachine.API.Create for "newest-cni-20210915210129-22848" (driver="docker")
	I0915 21:01:35.542660   71312 client.go:168] LocalClient.Create starting
	I0915 21:01:35.543726   71312 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem
	I0915 21:01:35.544428   71312 main.go:130] libmachine: Decoding PEM data...
	I0915 21:01:35.544636   71312 main.go:130] libmachine: Parsing certificate...
	I0915 21:01:35.544636   71312 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem
	I0915 21:01:35.544636   71312 main.go:130] libmachine: Decoding PEM data...
	I0915 21:01:35.544636   71312 main.go:130] libmachine: Parsing certificate...
	I0915 21:01:35.566700   71312 cli_runner.go:115] Run: docker network inspect newest-cni-20210915210129-22848 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0915 21:01:36.378678   71312 cli_runner.go:162] docker network inspect newest-cni-20210915210129-22848 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0915 21:01:36.393324   71312 network_create.go:255] running [docker network inspect newest-cni-20210915210129-22848] to gather additional debugging logs...
	I0915 21:01:36.393324   71312 cli_runner.go:115] Run: docker network inspect newest-cni-20210915210129-22848
	W0915 21:01:37.233513   71312 cli_runner.go:162] docker network inspect newest-cni-20210915210129-22848 returned with exit code 1
	I0915 21:01:37.233724   71312 network_create.go:258] error running [docker network inspect newest-cni-20210915210129-22848]: docker network inspect newest-cni-20210915210129-22848: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: newest-cni-20210915210129-22848
	I0915 21:01:37.233724   71312 network_create.go:260] output of [docker network inspect newest-cni-20210915210129-22848]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: newest-cni-20210915210129-22848
	
	** /stderr **
	I0915 21:01:37.271659   71312 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0915 21:01:38.175033   71312 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000006500] misses:0}
	I0915 21:01:38.175262   71312 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0915 21:01:38.175427   71312 network_create.go:106] attempt to create docker network newest-cni-20210915210129-22848 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0915 21:01:38.189485   71312 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20210915210129-22848
	W0915 21:01:39.002636   71312 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20210915210129-22848 returned with exit code 1
	W0915 21:01:39.002636   71312 network_create.go:98] failed to create docker network newest-cni-20210915210129-22848 192.168.49.0/24, will retry: subnet is taken
	I0915 21:01:39.032477   71312 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006500] amended:false}} dirty:map[] misses:0}
	I0915 21:01:39.032477   71312 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0915 21:01:39.064737   71312 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006500] amended:true}} dirty:map[192.168.49.0:0xc000006500 192.168.58.0:0xc00079afe0] misses:0}
	I0915 21:01:39.064933   71312 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0915 21:01:39.064933   71312 network_create.go:106] attempt to create docker network newest-cni-20210915210129-22848 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0915 21:01:39.089723   71312 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20210915210129-22848
	I0915 21:01:35.687871   57992 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: (1.8426864s)
	I0915 21:01:35.687871   57992 logs.go:270] 1 containers: [0580f16f21bc]
	I0915 21:01:35.688289   57992 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 21:01:36.752581   57992 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: (1.063193s)
	I0915 21:01:36.758612   57992 logs.go:270] 1 containers: [b6f05b0beae3]
	I0915 21:01:36.810473   57992 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0915 21:01:37.874916   57992 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}: (1.0644499s)
	I0915 21:01:37.875083   57992 logs.go:270] 0 containers: []
	W0915 21:01:37.875083   57992 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0915 21:01:37.895018   57992 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 21:01:39.075759   57992 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: (1.180748s)
	I0915 21:01:39.075979   57992 logs.go:270] 1 containers: [63466781db67]
	I0915 21:01:39.089310   57992 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 21:01:38.846119   60636 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 225ed7592959": (3.0557149s)
	I0915 21:01:38.906738   60636 logs.go:123] Gathering logs for kubernetes-dashboard [780e328d61ab] ...
	I0915 21:01:38.906738   60636 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 780e328d61ab"
	I0915 21:01:41.253332   60636 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 780e328d61ab": (2.3466091s)
	I0915 21:01:41.255552   60636 logs.go:123] Gathering logs for container status ...
	I0915 21:01:41.255808   60636 ssh_runner.go:152] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 21:01:42.198922   60636 logs.go:123] Gathering logs for coredns [bd8a9c6d784f] ...
	I0915 21:01:42.198922   60636 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 bd8a9c6d784f"
	I0915 21:01:40.477833   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-pxwb4" in "kube-system" namespace has status "Ready":"False"
	I0915 21:01:42.492692   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-pxwb4" in "kube-system" namespace has status "Ready":"False"
	I0915 21:01:40.362231   71312 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20210915210129-22848: (1.2720029s)
	I0915 21:01:40.362231   71312 network_create.go:90] docker network newest-cni-20210915210129-22848 192.168.58.0/24 created
	I0915 21:01:40.362388   71312 kic.go:106] calculated static IP "192.168.58.2" for the "newest-cni-20210915210129-22848" container
	I0915 21:01:40.395788   71312 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0915 21:01:41.270456   71312 cli_runner.go:115] Run: docker volume create newest-cni-20210915210129-22848 --label name.minikube.sigs.k8s.io=newest-cni-20210915210129-22848 --label created_by.minikube.sigs.k8s.io=true
	I0915 21:01:42.005703   71312 oci.go:102] Successfully created a docker volume newest-cni-20210915210129-22848
	I0915 21:01:42.025495   71312 cli_runner.go:115] Run: docker run --rm --name newest-cni-20210915210129-22848-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-20210915210129-22848 --entrypoint /usr/bin/test -v newest-cni-20210915210129-22848:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 -d /var/lib
	I0915 21:01:40.805487   57992 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: (1.7158412s)
	I0915 21:01:40.805487   57992 logs.go:270] 2 containers: [ac13dfebc110 e2e39e98bdf7]
	I0915 21:01:40.805487   57992 logs.go:123] Gathering logs for storage-provisioner [63466781db67] ...
	I0915 21:01:40.805724   57992 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 63466781db67"
	I0915 21:01:41.433637   57992 logs.go:123] Gathering logs for kube-controller-manager [ac13dfebc110] ...
	I0915 21:01:41.433637   57992 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 ac13dfebc110"
	I0915 21:01:42.385648   57992 logs.go:123] Gathering logs for dmesg ...
	I0915 21:01:42.385648   57992 ssh_runner.go:152] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 21:01:42.828108   57992 logs.go:123] Gathering logs for kube-apiserver [7e0da6df2ce9] ...
	I0915 21:01:42.828108   57992 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 7e0da6df2ce9"
	I0915 21:01:43.521876   57992 logs.go:123] Gathering logs for etcd [cc02e6c3d1f4] ...
	I0915 21:01:43.521876   57992 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 cc02e6c3d1f4"
	I0915 21:01:45.239234   57992 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 cc02e6c3d1f4": (1.7171714s)
	I0915 21:01:45.300141   57992 logs.go:123] Gathering logs for coredns [8caa653135bf] ...
	I0915 21:01:45.300394   57992 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 8caa653135bf"
	I0915 21:01:43.796132   60636 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 bd8a9c6d784f": (1.5972204s)
	I0915 21:01:43.796585   60636 logs.go:123] Gathering logs for dmesg ...
	I0915 21:01:43.796585   60636 ssh_runner.go:152] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 21:01:44.428434   60636 logs.go:123] Gathering logs for kube-proxy [45c113f5873c] ...
	I0915 21:01:44.428434   60636 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 45c113f5873c"
	I0915 21:01:46.398825   60636 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 45c113f5873c": (1.9704034s)
	I0915 21:01:46.403541   60636 logs.go:123] Gathering logs for storage-provisioner [dd15470492e2] ...
	I0915 21:01:46.404017   60636 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 dd15470492e2"
	I0915 21:01:45.004798   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-pxwb4" in "kube-system" namespace has status "Ready":"False"
	I0915 21:01:47.433645   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-pxwb4" in "kube-system" namespace has status "Ready":"False"
	I0915 21:01:47.967000   71312 cli_runner.go:168] Completed: docker run --rm --name newest-cni-20210915210129-22848-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-20210915210129-22848 --entrypoint /usr/bin/test -v newest-cni-20210915210129-22848:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 -d /var/lib: (5.9415434s)
	I0915 21:01:47.967000   71312 oci.go:106] Successfully prepared a docker volume newest-cni-20210915210129-22848
	I0915 21:01:47.967000   71312 preload.go:131] Checking if preload exists for k8s version v1.22.2-rc.0 and runtime docker
	I0915 21:01:47.967000   71312 kic.go:179] Starting extracting preloaded images to volume ...
	I0915 21:01:47.978950   71312 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0915 21:01:47.978950   71312 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v12-v1.22.2-rc.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-20210915210129-22848:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 -I lz4 -xf /preloaded.tar -C /extractDir
	W0915 21:01:48.884497   71312 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v12-v1.22.2-rc.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-20210915210129-22848:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I0915 21:01:48.884497   71312 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v12-v1.22.2-rc.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-20210915210129-22848:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: System.Exception: The notification platform is unavailable.
	
	����   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)
	   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__6.MoveNext() in C:\workspaces\PR-15387\src\github.com\docker\pinata\win\src\Docker.WPF\PromptShareDirectory.cs:line 53
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__7.MoveNext() in C:\workspaces\PR-15387\src\github.com\docker\pinata\win\src\Docker.ApiServices\Mounting\FileSharing.cs:line 86
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__5.MoveNext() in C:\workspaces\PR-15387\src\github.com\docker\pinata\win\src\Docker.ApiServices\Mounting\FileSharing.cs:line 53
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\workspaces\PR-15387\src\github.com\docker\pinata\win\src\Docker.HttpApi\Controllers\FilesharingController.cs:line 21
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()
	CreateToastNotifier
	Windows.UI, Version=255.255.255.255, Culture=neutral, PublicKeyToken=null, ContentType=WindowsRuntime
	Windows.UI.Notifications.ToastNotificationManager
	Windows.UI.Notifications.ToastNotifier CreateToastNotifier(System.String)
	RestrictedDescription: The notification platform is unavailable.
	See 'docker run --help'.
	I0915 21:01:47.061987   57992 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 8caa653135bf": (1.7616039s)
	I0915 21:01:47.063214   57992 logs.go:123] Gathering logs for kube-scheduler [0580f16f21bc] ...
	I0915 21:01:47.063528   57992 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 0580f16f21bc"
	I0915 21:01:49.613042   57992 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 0580f16f21bc": (2.549531s)
	I0915 21:01:49.624978   57992 logs.go:123] Gathering logs for kube-proxy [b6f05b0beae3] ...
	I0915 21:01:49.624978   57992 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 b6f05b0beae3"
	I0915 21:01:48.550176   60636 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 dd15470492e2": (2.1461734s)
	I0915 21:01:48.550176   60636 logs.go:123] Gathering logs for kube-controller-manager [5c2cc55aa311] ...
	I0915 21:01:48.550176   60636 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 5c2cc55aa311"
	I0915 21:01:52.067855   60636 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 5c2cc55aa311": (3.517396s)
	I0915 21:01:52.092440   60636 logs.go:123] Gathering logs for kube-controller-manager [3f363fd7e539] ...
	I0915 21:01:52.092440   60636 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 3f363fd7e539"
	I0915 21:01:49.452363   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-pxwb4" in "kube-system" namespace has status "Ready":"False"
	I0915 21:01:51.525296   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-pxwb4" in "kube-system" namespace has status "Ready":"False"
	I0915 21:01:49.282901   71312 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.3039593s)
	I0915 21:01:49.283168   71312 info.go:263] docker info: {ID:AZM6:4F7P:D7J3:PGKE:EIYN:3OQU:SEA3:BB2T:P6VC:GKKH:UKSA:R2VX Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:77 OomKillDisable:true NGoroutines:77 SystemTime:2021-09-15 21:01:48.7132168 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://inde
x.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[]
ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0915 21:01:49.295572   71312 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0915 21:01:50.536199   71312 cli_runner.go:168] Completed: docker info --format "'{{json .SecurityOptions}}'": (1.2405023s)
	I0915 21:01:50.548217   71312 cli_runner.go:115] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-20210915210129-22848 --name newest-cni-20210915210129-22848 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-20210915210129-22848 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-20210915210129-22848 --network newest-cni-20210915210129-22848 --ip 192.168.58.2 --volume newest-cni-20210915210129-22848:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56
	I0915 21:01:50.695770   57992 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 b6f05b0beae3": (1.0707987s)
	I0915 21:01:50.698768   57992 logs.go:123] Gathering logs for kube-controller-manager [e2e39e98bdf7] ...
	I0915 21:01:50.698768   57992 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 e2e39e98bdf7"
	I0915 21:01:53.241317   57992 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 e2e39e98bdf7": (2.5422604s)
	I0915 21:01:53.294874   57992 logs.go:123] Gathering logs for Docker ...
	I0915 21:01:53.295103   57992 ssh_runner.go:152] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0915 21:01:54.028175   57992 logs.go:123] Gathering logs for container status ...
	I0915 21:01:54.028175   57992 ssh_runner.go:152] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 21:01:55.053013   57992 ssh_runner.go:192] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (1.0248446s)
	I0915 21:01:55.056423   57992 logs.go:123] Gathering logs for kubelet ...
	I0915 21:01:55.056605   57992 ssh_runner.go:152] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 21:01:55.545051   60636 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 3f363fd7e539": (3.4526333s)
	I0915 21:01:55.573045   60636 logs.go:123] Gathering logs for Docker ...
	I0915 21:01:55.573045   60636 ssh_runner.go:152] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0915 21:01:53.947935   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-pxwb4" in "kube-system" namespace has status "Ready":"False"
	I0915 21:01:56.027075   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-pxwb4" in "kube-system" namespace has status "Ready":"False"
	I0915 21:01:58.606645   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-pxwb4" in "kube-system" namespace has status "Ready":"False"
	I0915 21:01:56.313372   71312 cli_runner.go:168] Completed: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-20210915210129-22848 --name newest-cni-20210915210129-22848 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-20210915210129-22848 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-20210915210129-22848 --network newest-cni-20210915210129-22848 --ip 192.168.58.2 --volume newest-cni-20210915210129-22848:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56: (5.7651915s)
	I0915 21:01:56.325370   71312 cli_runner.go:115] Run: docker container inspect newest-cni-20210915210129-22848 --format={{.State.Running}}
	I0915 21:01:57.234231   71312 cli_runner.go:115] Run: docker container inspect newest-cni-20210915210129-22848 --format={{.State.Status}}
	I0915 21:01:58.025052   71312 cli_runner.go:115] Run: docker exec newest-cni-20210915210129-22848 stat /var/lib/dpkg/alternatives/iptables
	I0915 21:01:55.930904   57992 logs.go:123] Gathering logs for describe nodes ...
	I0915 21:01:55.930904   57992 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 21:01:58.750289   60636 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:57730/healthz ...
	I0915 21:01:58.898254   60636 api_server.go:265] https://127.0.0.1:57730/healthz returned 200:
	ok
	I0915 21:01:59.241783   60636 api_server.go:139] control plane version: v1.22.1
	I0915 21:01:59.241940   60636 api_server.go:129] duration metric: took 48.8965337s to wait for apiserver health ...
	I0915 21:01:59.241940   60636 system_pods.go:43] waiting for kube-system pods to appear ...
	I0915 21:01:59.256633   60636 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 21:02:02.785974   60636 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: (3.5293638s)
	I0915 21:02:02.786332   60636 logs.go:270] 1 containers: [5b512fefe3b9]
	I0915 21:02:02.829228   60636 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 21:02:00.749044   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-pxwb4" in "kube-system" namespace has status "Ready":"False"
	I0915 21:02:02.986676   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-pxwb4" in "kube-system" namespace has status "Ready":"False"
	I0915 21:01:59.473575   71312 cli_runner.go:168] Completed: docker exec newest-cni-20210915210129-22848 stat /var/lib/dpkg/alternatives/iptables: (1.448533s)
	I0915 21:01:59.473575   71312 oci.go:281] the created container "newest-cni-20210915210129-22848" has a running status.
	I0915 21:01:59.474123   71312 kic.go:210] Creating ssh key for kic: C:\Users\jenkins\minikube-integration\.minikube\machines\newest-cni-20210915210129-22848\id_rsa...
	I0915 21:01:59.731261   71312 kic_runner.go:188] docker (temp): C:\Users\jenkins\minikube-integration\.minikube\machines\newest-cni-20210915210129-22848\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0915 21:02:01.561699   71312 cli_runner.go:115] Run: docker container inspect newest-cni-20210915210129-22848 --format={{.State.Status}}
	I0915 21:02:02.422706   71312 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0915 21:02:02.422706   71312 kic_runner.go:115] Args: [docker exec --privileged newest-cni-20210915210129-22848 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0915 21:02:03.912553   71312 kic_runner.go:124] Done: [docker exec --privileged newest-cni-20210915210129-22848 chown docker:docker /home/docker/.ssh/authorized_keys]: (1.4898573s)
	I0915 21:02:03.918156   71312 kic.go:250] ensuring only current user has permissions to key file located at : C:\Users\jenkins\minikube-integration\.minikube\machines\newest-cni-20210915210129-22848\id_rsa...
	I0915 21:02:06.402733   60636 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: (3.5735284s)
	I0915 21:02:06.402733   60636 logs.go:270] 1 containers: [225ed7592959]
	I0915 21:02:06.419837   60636 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 21:02:05.604697   59380 pod_ready.go:102] pod "coredns-fb8b8dccf-pxwb4" in "kube-system" namespace has status "Ready":"False"
	I0915 21:02:08.342362   59380 pod_ready.go:92] pod "coredns-fb8b8dccf-pxwb4" in "kube-system" namespace has status "Ready":"True"
	I0915 21:02:08.342362   59380 pod_ready.go:81] duration metric: took 1m37.0743164s waiting for pod "coredns-fb8b8dccf-pxwb4" in "kube-system" namespace to be "Ready" ...
	I0915 21:02:08.342628   59380 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-20210915203352-22848" in "kube-system" namespace to be "Ready" ...
	I0915 21:02:08.431670   59380 pod_ready.go:92] pod "etcd-old-k8s-version-20210915203352-22848" in "kube-system" namespace has status "Ready":"True"
	I0915 21:02:08.431837   59380 pod_ready.go:81] duration metric: took 89.2095ms waiting for pod "etcd-old-k8s-version-20210915203352-22848" in "kube-system" namespace to be "Ready" ...
	I0915 21:02:08.431837   59380 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-20210915203352-22848" in "kube-system" namespace to be "Ready" ...
	I0915 21:02:08.530628   59380 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-20210915203352-22848" in "kube-system" namespace has status "Ready":"True"
	I0915 21:02:08.530628   59380 pod_ready.go:81] duration metric: took 98.7918ms waiting for pod "kube-apiserver-old-k8s-version-20210915203352-22848" in "kube-system" namespace to be "Ready" ...
	I0915 21:02:08.530628   59380 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-20210915203352-22848" in "kube-system" namespace to be "Ready" ...
	I0915 21:02:08.623612   59380 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-20210915203352-22848" in "kube-system" namespace has status "Ready":"True"
	I0915 21:02:08.623744   59380 pod_ready.go:81] duration metric: took 93.1164ms waiting for pod "kube-controller-manager-old-k8s-version-20210915203352-22848" in "kube-system" namespace to be "Ready" ...
	I0915 21:02:08.623744   59380 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fjmvd" in "kube-system" namespace to be "Ready" ...
	I0915 21:02:04.806925   71312 cli_runner.go:115] Run: docker container inspect newest-cni-20210915210129-22848 --format={{.State.Status}}
	I0915 21:02:05.663179   71312 machine.go:88] provisioning docker machine ...
	I0915 21:02:05.663977   71312 ubuntu.go:169] provisioning hostname "newest-cni-20210915210129-22848"
	I0915 21:02:05.678790   71312 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210915210129-22848
	I0915 21:02:06.548813   71312 main.go:130] libmachine: Using SSH client type: native
	I0915 21:02:06.559314   71312 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x11facc0] 0x11fdb80 <nil>  [] 0s} 127.0.0.1 58050 <nil> <nil>}
	I0915 21:02:06.559509   71312 main.go:130] libmachine: About to run SSH command:
	sudo hostname newest-cni-20210915210129-22848 && echo "newest-cni-20210915210129-22848" | sudo tee /etc/hostname
	I0915 21:02:07.359353   71312 main.go:130] libmachine: SSH cmd err, output: <nil>: newest-cni-20210915210129-22848
	
	I0915 21:02:07.375049   71312 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210915210129-22848
	I0915 21:02:08.126047   71312 main.go:130] libmachine: Using SSH client type: native
	I0915 21:02:08.126047   71312 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x11facc0] 0x11fdb80 <nil>  [] 0s} 127.0.0.1 58050 <nil> <nil>}
	I0915 21:02:08.126047   71312 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20210915210129-22848' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20210915210129-22848/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20210915210129-22848' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0915 21:02:08.867685   71312 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0915 21:02:08.867847   71312 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins\minikube-integration\.minikube CaCertPath:C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins\minikube-integration\.minikube}
	I0915 21:02:08.867847   71312 ubuntu.go:177] setting up certificates
	I0915 21:02:08.867847   71312 provision.go:83] configureAuth start
	I0915 21:02:08.881391   71312 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20210915210129-22848
	I0915 21:02:07.025787   57992 ssh_runner.go:192] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (11.0949547s)
	I0915 21:02:09.591724   57992 system_pods.go:59] 7 kube-system pods found
	I0915 21:02:09.591724   57992 system_pods.go:61] "coredns-78fcd69978-8t7vx" [acd23737-160a-4681-bfa9-dca716e5d9db] Running
	I0915 21:02:09.591724   57992 system_pods.go:61] "etcd-default-k8s-different-port-20210915205315-22848" [9a3f18f6-09c1-4077-a874-96026dbcac7e] Running
	I0915 21:02:09.591724   57992 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20210915205315-22848" [873c66c1-96bd-4857-bb27-2c1ce18d8771] Running
	I0915 21:02:09.591724   57992 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20210915205315-22848" [1d847019-cbd3-4be6-aa0d-6fd934cc7ddf] Running
	I0915 21:02:09.591724   57992 system_pods.go:61] "kube-proxy-tt8jd" [bd8a85d9-160c-4399-ae10-d7d8621870ba] Running
	I0915 21:02:09.591724   57992 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20210915205315-22848" [23c71740-6bdd-46e8-88f5-9ff955deb18b] Running
	I0915 21:02:09.591724   57992 system_pods.go:61] "storage-provisioner" [dc3d7b88-6efe-4b41-a4c0-4580c90d3bdc] Running
	I0915 21:02:09.591724   57992 system_pods.go:74] duration metric: took 38.2264389s to wait for pod list to return data ...
	I0915 21:02:09.591724   57992 default_sa.go:34] waiting for default service account to be created ...
	I0915 21:02:09.618788   57992 default_sa.go:45] found service account: "default"
	I0915 21:02:09.618788   57992 default_sa.go:55] duration metric: took 27.0643ms for default service account to be created ...
	I0915 21:02:09.618788   57992 system_pods.go:116] waiting for k8s-apps to be running ...
	I0915 21:02:09.699848   57992 system_pods.go:86] 7 kube-system pods found
	I0915 21:02:09.700006   57992 system_pods.go:89] "coredns-78fcd69978-8t7vx" [acd23737-160a-4681-bfa9-dca716e5d9db] Running
	I0915 21:02:09.700006   57992 system_pods.go:89] "etcd-default-k8s-different-port-20210915205315-22848" [9a3f18f6-09c1-4077-a874-96026dbcac7e] Running
	I0915 21:02:09.700006   57992 system_pods.go:89] "kube-apiserver-default-k8s-different-port-20210915205315-22848" [873c66c1-96bd-4857-bb27-2c1ce18d8771] Running
	I0915 21:02:09.700006   57992 system_pods.go:89] "kube-controller-manager-default-k8s-different-port-20210915205315-22848" [1d847019-cbd3-4be6-aa0d-6fd934cc7ddf] Running
	I0915 21:02:09.700006   57992 system_pods.go:89] "kube-proxy-tt8jd" [bd8a85d9-160c-4399-ae10-d7d8621870ba] Running
	I0915 21:02:09.700006   57992 system_pods.go:89] "kube-scheduler-default-k8s-different-port-20210915205315-22848" [23c71740-6bdd-46e8-88f5-9ff955deb18b] Running
	I0915 21:02:09.700006   57992 system_pods.go:89] "storage-provisioner" [dc3d7b88-6efe-4b41-a4c0-4580c90d3bdc] Running
	I0915 21:02:09.700006   57992 system_pods.go:126] duration metric: took 81.2184ms to wait for k8s-apps to be running ...
	I0915 21:02:09.700006   57992 system_svc.go:44] waiting for kubelet service to be running ....
	I0915 21:02:09.718525   57992 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
	I0915 21:02:09.868994   57992 system_svc.go:56] duration metric: took 168.989ms WaitForService to wait for kubelet.
	I0915 21:02:09.868994   57992 kubeadm.go:547] duration metric: took 3m8.5474799s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0915 21:02:09.868994   57992 node_conditions.go:102] verifying NodePressure condition ...
	I0915 21:02:09.909349   57992 node_conditions.go:122] node storage ephemeral capacity is 65792556Ki
	I0915 21:02:09.909584   57992 node_conditions.go:123] node cpu capacity is 4
	I0915 21:02:09.909584   57992 node_conditions.go:105] duration metric: took 40.5899ms to run NodePressure ...
	I0915 21:02:09.909751   57992 start.go:231] waiting for startup goroutines ...
	I0915 21:02:10.165395   57992 start.go:462] kubectl: 1.20.0, cluster: 1.22.1 (minor skew: 2)
	I0915 21:02:10.169631   57992 out.go:177] 
	W0915 21:02:10.193593   57992 out.go:242] ! C:\Program Files\Docker\Docker\resources\bin\kubectl.exe is version 1.20.0, which may have incompatibilites with Kubernetes 1.22.1.
	I0915 21:02:10.198575   57992 out.go:177]   - Want kubectl v1.22.1? Try 'minikube kubectl -- get pods -A'
	I0915 21:02:10.203496   57992 out.go:177] * Done! kubectl is now configured to use "default-k8s-different-port-20210915205315-22848" cluster and "default" namespace by default
	I0915 21:02:09.039489   60636 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: (2.6193127s)
	I0915 21:02:09.039489   60636 logs.go:270] 1 containers: [bd8a9c6d784f]
	I0915 21:02:09.049864   60636 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 21:02:10.941102   60636 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: (1.890878s)
	I0915 21:02:10.941102   60636 logs.go:270] 1 containers: [c93c1026cc1a]
	I0915 21:02:10.970092   60636 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 21:02:08.682303   59380 pod_ready.go:92] pod "kube-proxy-fjmvd" in "kube-system" namespace has status "Ready":"True"
	I0915 21:02:08.682576   59380 pod_ready.go:81] duration metric: took 58.736ms waiting for pod "kube-proxy-fjmvd" in "kube-system" namespace to be "Ready" ...
	I0915 21:02:08.682576   59380 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-20210915203352-22848" in "kube-system" namespace to be "Ready" ...
	I0915 21:02:08.764335   59380 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-20210915203352-22848" in "kube-system" namespace has status "Ready":"True"
	I0915 21:02:08.764335   59380 pod_ready.go:81] duration metric: took 81.7595ms waiting for pod "kube-scheduler-old-k8s-version-20210915203352-22848" in "kube-system" namespace to be "Ready" ...
	I0915 21:02:08.764335   59380 pod_ready.go:38] duration metric: took 1m42.6827872s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 21:02:08.764335   59380 api_server.go:50] waiting for apiserver process to appear ...
	I0915 21:02:08.788795   59380 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 21:02:11.886042   59380 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: (3.0970689s)
	I0915 21:02:11.886149   59380 logs.go:270] 1 containers: [3537b9c6e4ed]
	I0915 21:02:11.900760   59380 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 21:02:09.631726   71312 provision.go:138] copyHostCerts
	I0915 21:02:09.632738   71312 exec_runner.go:145] found C:\Users\jenkins\minikube-integration\.minikube/ca.pem, removing ...
	I0915 21:02:09.632738   71312 exec_runner.go:208] rm: C:\Users\jenkins\minikube-integration\.minikube\ca.pem
	I0915 21:02:09.632738   71312 exec_runner.go:152] cp: C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0915 21:02:09.634742   71312 exec_runner.go:145] found C:\Users\jenkins\minikube-integration\.minikube/cert.pem, removing ...
	I0915 21:02:09.634742   71312 exec_runner.go:208] rm: C:\Users\jenkins\minikube-integration\.minikube\cert.pem
	I0915 21:02:09.635742   71312 exec_runner.go:152] cp: C:\Users\jenkins\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0915 21:02:09.637741   71312 exec_runner.go:145] found C:\Users\jenkins\minikube-integration\.minikube/key.pem, removing ...
	I0915 21:02:09.637741   71312 exec_runner.go:208] rm: C:\Users\jenkins\minikube-integration\.minikube\key.pem
	I0915 21:02:09.638743   71312 exec_runner.go:152] cp: C:\Users\jenkins\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins\minikube-integration\.minikube/key.pem (1675 bytes)
	I0915 21:02:09.639741   71312 provision.go:112] generating server cert: C:\Users\jenkins\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.newest-cni-20210915210129-22848 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20210915210129-22848]
	I0915 21:02:09.859069   71312 provision.go:172] copyRemoteCerts
	I0915 21:02:09.876086   71312 ssh_runner.go:152] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0915 21:02:09.890070   71312 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210915210129-22848
	I0915 21:02:10.688626   71312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58050 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\newest-cni-20210915210129-22848\id_rsa Username:docker}
	I0915 21:02:11.144232   71312 ssh_runner.go:192] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.2681545s)
	I0915 21:02:11.144828   71312 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1265 bytes)
	I0915 21:02:11.307838   71312 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0915 21:02:11.565420   71312 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0915 21:02:11.737230   71312 provision.go:86] duration metric: configureAuth took 2.869402s
	I0915 21:02:11.737230   71312 ubuntu.go:193] setting minikube options for container-runtime
	I0915 21:02:11.737892   71312 config.go:177] Loaded profile config "newest-cni-20210915210129-22848": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.2-rc.0
	I0915 21:02:11.742616   71312 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210915210129-22848
	I0915 21:02:12.650152   71312 main.go:130] libmachine: Using SSH client type: native
	I0915 21:02:12.650744   71312 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x11facc0] 0x11fdb80 <nil>  [] 0s} 127.0.0.1 58050 <nil> <nil>}
	I0915 21:02:12.650902   71312 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0915 21:02:13.206639   71312 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0915 21:02:13.206863   71312 ubuntu.go:71] root file system type: overlay
	I0915 21:02:13.207210   71312 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0915 21:02:13.217715   71312 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210915210129-22848
	I0915 21:02:14.076472   71312 main.go:130] libmachine: Using SSH client type: native
	I0915 21:02:14.076472   71312 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x11facc0] 0x11fdb80 <nil>  [] 0s} 127.0.0.1 58050 <nil> <nil>}
	I0915 21:02:14.076472   71312 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0915 21:02:13.250820   60636 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: (2.2807428s)
	I0915 21:02:13.251041   60636 logs.go:270] 1 containers: [45c113f5873c]
	I0915 21:02:13.254186   60636 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0915 21:02:14.972340   60636 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}: (1.7181659s)
	I0915 21:02:14.972469   60636 logs.go:270] 1 containers: [780e328d61ab]
	I0915 21:02:14.992056   60636 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 21:02:17.412975   60636 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: (2.4200496s)
	I0915 21:02:17.413096   60636 logs.go:270] 1 containers: [dd15470492e2]
	I0915 21:02:17.446750   60636 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 21:02:14.320564   59380 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: (2.4194268s)
	I0915 21:02:14.320721   59380 logs.go:270] 1 containers: [0b52d929b618]
	I0915 21:02:14.340316   59380 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 21:02:16.462510   59380 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: (2.1221147s)
	I0915 21:02:16.462510   59380 logs.go:270] 2 containers: [56f408c7ef80 edc543ebc579]
	I0915 21:02:16.481520   59380 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 21:02:17.909577   59380 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: (1.4279284s)
	I0915 21:02:17.909577   59380 logs.go:270] 1 containers: [f62b255531be]
	I0915 21:02:17.922156   59380 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 21:02:14.673467   71312 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0915 21:02:14.701532   71312 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210915210129-22848
	I0915 21:02:15.535422   71312 main.go:130] libmachine: Using SSH client type: native
	I0915 21:02:15.536146   71312 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x11facc0] 0x11fdb80 <nil>  [] 0s} 127.0.0.1 58050 <nil> <nil>}
	I0915 21:02:15.536146   71312 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0915 21:02:19.765952   60636 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: (2.3190791s)
	I0915 21:02:19.766191   60636 logs.go:270] 2 containers: [5c2cc55aa311 3f363fd7e539]
	I0915 21:02:19.766375   60636 logs.go:123] Gathering logs for kube-apiserver [5b512fefe3b9] ...
	I0915 21:02:19.766375   60636 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 5b512fefe3b9"
	I0915 21:02:21.969273   60636 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 5b512fefe3b9": (2.2029126s)
	I0915 21:02:21.991142   60636 logs.go:123] Gathering logs for etcd [225ed7592959] ...
	I0915 21:02:21.991142   60636 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 225ed7592959"
	I0915 21:02:19.029541   59380 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: (1.1073925s)
	I0915 21:02:19.030029   59380 logs.go:270] 1 containers: [b6a7846cc5e6]
	I0915 21:02:19.057397   59380 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0915 21:02:20.309647   59380 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}: (1.2511439s)
	I0915 21:02:20.309647   59380 logs.go:270] 1 containers: [cf62060e4365]
	I0915 21:02:20.325297   59380 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 21:02:22.494620   59380 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: (2.1690651s)
	I0915 21:02:22.494620   59380 logs.go:270] 1 containers: [2708aca9f909]
	I0915 21:02:22.513090   59380 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 21:02:22.819430   71312 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-07-30 19:52:33.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2021-09-15 21:02:14.653086000 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	+BindsTo=containerd.service
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0915 21:02:22.819430   71312 machine.go:91] provisioned docker machine in 17.1563625s
	I0915 21:02:22.819430   71312 client.go:171] LocalClient.Create took 47.2770749s
	I0915 21:02:22.819664   71312 start.go:168] duration metric: libmachine.API.Create for "newest-cni-20210915210129-22848" took 47.277638s
	I0915 21:02:22.819664   71312 start.go:267] post-start starting for "newest-cni-20210915210129-22848" (driver="docker")
	I0915 21:02:22.819664   71312 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0915 21:02:22.841502   71312 ssh_runner.go:152] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0915 21:02:22.851387   71312 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210915210129-22848
	I0915 21:02:23.711808   71312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58050 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\newest-cni-20210915210129-22848\id_rsa Username:docker}
	I0915 21:02:25.218412   60636 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 225ed7592959": (3.2272912s)
	I0915 21:02:25.292510   60636 logs.go:123] Gathering logs for container status ...
	I0915 21:02:25.292510   60636 ssh_runner.go:152] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 21:02:26.452902   60636 ssh_runner.go:192] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (1.1603996s)
	I0915 21:02:26.453922   60636 logs.go:123] Gathering logs for dmesg ...
	I0915 21:02:26.453922   60636 ssh_runner.go:152] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 21:02:27.577219   60636 ssh_runner.go:192] Completed: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400": (1.1233037s)
	I0915 21:02:27.577219   60636 logs.go:123] Gathering logs for coredns [bd8a9c6d784f] ...
	I0915 21:02:27.577219   60636 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 bd8a9c6d784f"
	I0915 21:02:23.899916   59380 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: (1.3868346s)
	I0915 21:02:23.900058   59380 logs.go:270] 2 containers: [5dcc93538d3a 2444d44bba6b]
	I0915 21:02:23.900058   59380 logs.go:123] Gathering logs for kube-apiserver [3537b9c6e4ed] ...
	I0915 21:02:23.900058   59380 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 3537b9c6e4ed"
	I0915 21:02:26.584619   59380 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 3537b9c6e4ed": (2.6845789s)
	I0915 21:02:26.628390   59380 logs.go:123] Gathering logs for etcd [0b52d929b618] ...
	I0915 21:02:26.628390   59380 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 0b52d929b618"
	I0915 21:02:24.118327   71312 ssh_runner.go:192] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.2766951s)
	I0915 21:02:24.152027   71312 ssh_runner.go:152] Run: cat /etc/os-release
	I0915 21:02:24.204076   71312 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0915 21:02:24.204251   71312 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0915 21:02:24.204385   71312 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0915 21:02:24.204385   71312 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0915 21:02:24.204385   71312 filesync.go:126] Scanning C:\Users\jenkins\minikube-integration\.minikube\addons for local assets ...
	I0915 21:02:24.204922   71312 filesync.go:126] Scanning C:\Users\jenkins\minikube-integration\.minikube\files for local assets ...
	I0915 21:02:24.206378   71312 filesync.go:149] local asset: C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\228482.pem -> 228482.pem in /etc/ssl/certs
	I0915 21:02:24.230742   71312 ssh_runner.go:152] Run: sudo mkdir -p /etc/ssl/certs
	I0915 21:02:24.321807   71312 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\files\etc\ssl\certs\228482.pem --> /etc/ssl/certs/228482.pem (1708 bytes)
	I0915 21:02:24.563612   71312 start.go:270] post-start completed in 1.7439584s
	I0915 21:02:24.582605   71312 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20210915210129-22848
	I0915 21:02:25.320802   71312 profile.go:148] Saving config to C:\Users\jenkins\minikube-integration\.minikube\profiles\newest-cni-20210915210129-22848\config.json ...
	I0915 21:02:25.350199   71312 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 21:02:25.359979   71312 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210915210129-22848
	I0915 21:02:26.197254   71312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58050 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\newest-cni-20210915210129-22848\id_rsa Username:docker}
	I0915 21:02:26.542917   71312 ssh_runner.go:192] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.1927259s)
	I0915 21:02:26.543042   71312 start.go:129] duration metric: createHost completed in 51.0058333s
	I0915 21:02:26.543042   71312 start.go:80] releasing machines lock for "newest-cni-20210915210129-22848", held for 51.0063581s
	I0915 21:02:26.560670   71312 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20210915210129-22848
	I0915 21:02:27.523676   71312 ssh_runner.go:152] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0915 21:02:27.537720   71312 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210915210129-22848
	I0915 21:02:27.552138   71312 ssh_runner.go:152] Run: systemctl --version
	I0915 21:02:27.563143   71312 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210915210129-22848
	I0915 21:02:28.384022   71312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58050 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\newest-cni-20210915210129-22848\id_rsa Username:docker}
	I0915 21:02:28.420395   71312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58050 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\newest-cni-20210915210129-22848\id_rsa Username:docker}
	I0915 21:02:28.775210   71312 ssh_runner.go:192] Completed: systemctl --version: (1.2230797s)
	I0915 21:02:28.794440   71312 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service containerd
	I0915 21:02:30.293345   60636 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 bd8a9c6d784f": (2.7158756s)
	I0915 21:02:30.293616   60636 logs.go:123] Gathering logs for kube-proxy [45c113f5873c] ...
	I0915 21:02:30.293616   60636 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 45c113f5873c"
	I0915 21:02:29.188968   59380 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 0b52d929b618": (2.5605947s)
	I0915 21:02:29.222295   59380 logs.go:123] Gathering logs for coredns [56f408c7ef80] ...
	I0915 21:02:29.222653   59380 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 56f408c7ef80"
	I0915 21:02:30.913622   59380 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 56f408c7ef80": (1.6909792s)
	I0915 21:02:30.914656   59380 logs.go:123] Gathering logs for Docker ...
	I0915 21:02:30.914656   59380 ssh_runner.go:152] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0915 21:02:31.073495   59380 logs.go:123] Gathering logs for container status ...
	I0915 21:02:31.073495   59380 ssh_runner.go:152] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 21:02:32.132766   59380 ssh_runner.go:192] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (1.0592772s)
	I0915 21:02:32.134065   59380 logs.go:123] Gathering logs for dmesg ...
	I0915 21:02:32.134065   59380 ssh_runner.go:152] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 21:02:32.761531   59380 logs.go:123] Gathering logs for describe nodes ...
	I0915 21:02:32.761774   59380 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 21:02:29.232877   71312 ssh_runner.go:192] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.7092123s)
	I0915 21:02:29.254275   71312 ssh_runner.go:152] Run: sudo systemctl cat docker.service
	I0915 21:02:29.409130   71312 cruntime.go:255] skipping containerd shutdown because we are bound to it
	I0915 21:02:29.427839   71312 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service crio
	I0915 21:02:29.533831   71312 ssh_runner.go:152] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0915 21:02:29.738307   71312 ssh_runner.go:152] Run: sudo systemctl unmask docker.service
	I0915 21:02:30.581903   71312 ssh_runner.go:152] Run: sudo systemctl enable docker.socket
	I0915 21:02:31.624980   71312 ssh_runner.go:192] Completed: sudo systemctl enable docker.socket: (1.0430844s)
	I0915 21:02:31.643975   71312 ssh_runner.go:152] Run: sudo systemctl cat docker.service
	I0915 21:02:31.900633   71312 ssh_runner.go:152] Run: sudo systemctl daemon-reload
	I0915 21:02:32.986333   71312 ssh_runner.go:192] Completed: sudo systemctl daemon-reload: (1.0857068s)
	I0915 21:02:33.011636   71312 ssh_runner.go:152] Run: sudo systemctl start docker
	I0915 21:02:33.142677   71312 ssh_runner.go:152] Run: docker version --format {{.Server.Version}}
	I0915 21:02:33.621182   71312 ssh_runner.go:152] Run: docker version --format {{.Server.Version}}
	I0915 21:02:34.075598   71312 out.go:204] * Preparing Kubernetes v1.22.2-rc.0 on Docker 20.10.8 ...
	I0915 21:02:34.094423   71312 cli_runner.go:115] Run: docker exec -t newest-cni-20210915210129-22848 dig +short host.docker.internal
	I0915 21:02:35.705263   71312 cli_runner.go:168] Completed: docker exec -t newest-cni-20210915210129-22848 dig +short host.docker.internal: (1.6108511s)
	I0915 21:02:35.705263   71312 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0915 21:02:35.721787   71312 ssh_runner.go:152] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0915 21:02:35.810215   71312 ssh_runner.go:152] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0915 21:02:36.040817   71312 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20210915210129-22848
	I0915 21:02:36.844223   71312 out.go:177]   - kubelet.network-plugin=cni
	I0915 21:02:33.225118   60636 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 45c113f5873c": (2.9315214s)
	I0915 21:02:33.225514   60636 logs.go:123] Gathering logs for describe nodes ...
	I0915 21:02:33.225514   60636 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 21:02:36.847964   71312 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0915 21:02:36.848507   71312 preload.go:131] Checking if preload exists for k8s version v1.22.2-rc.0 and runtime docker
	I0915 21:02:36.861442   71312 ssh_runner.go:152] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0915 21:02:37.370821   71312 docker.go:558] Got preloaded images: 
	I0915 21:02:37.371009   71312 docker.go:564] k8s.gcr.io/kube-apiserver:v1.22.2-rc.0 wasn't preloaded
	I0915 21:02:37.385934   71312 ssh_runner.go:152] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0915 21:02:37.530391   71312 ssh_runner.go:152] Run: which lz4
	I0915 21:02:37.599862   71312 ssh_runner.go:152] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0915 21:02:37.639466   71312 ssh_runner.go:309] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0915 21:02:37.640111   71312 ssh_runner.go:319] scp C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v12-v1.22.2-rc.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (540035829 bytes)
	I0915 21:02:42.084060   60636 ssh_runner.go:192] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (8.8586035s)
	I0915 21:02:42.088584   60636 logs.go:123] Gathering logs for kubernetes-dashboard [780e328d61ab] ...
	I0915 21:02:42.088774   60636 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 780e328d61ab"
	I0915 21:02:40.222423   59380 ssh_runner.go:192] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (7.4606972s)
	I0915 21:02:40.248362   59380 logs.go:123] Gathering logs for kube-scheduler [f62b255531be] ...
	I0915 21:02:40.250291   59380 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 f62b255531be"
	I0915 21:02:45.538641   60636 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 780e328d61ab": (3.4498893s)
	I0915 21:02:45.539921   60636 logs.go:123] Gathering logs for storage-provisioner [dd15470492e2] ...
	I0915 21:02:45.539921   60636 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 dd15470492e2"
	I0915 21:02:44.722028   59380 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 f62b255531be": (4.4715535s)
	I0915 21:02:44.739482   59380 logs.go:123] Gathering logs for storage-provisioner [2708aca9f909] ...
	I0915 21:02:44.739702   59380 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 2708aca9f909"
	I0915 21:02:50.757829   60636 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 dd15470492e2": (5.2179427s)
	I0915 21:02:50.759507   60636 logs.go:123] Gathering logs for kube-controller-manager [5c2cc55aa311] ...
	I0915 21:02:50.759684   60636 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 5c2cc55aa311"
	I0915 21:02:48.982771   59380 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 2708aca9f909": (4.2430962s)
	I0915 21:02:48.982771   59380 logs.go:123] Gathering logs for kube-controller-manager [5dcc93538d3a] ...
	I0915 21:02:48.982771   59380 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 5dcc93538d3a"
	I0915 21:02:57.130767   59380 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 5dcc93538d3a": (8.1478521s)
	I0915 21:02:57.167857   59380 logs.go:123] Gathering logs for coredns [edc543ebc579] ...
	I0915 21:02:57.167857   59380 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 edc543ebc579"
	I0915 21:02:58.304395   60636 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 5c2cc55aa311": (7.5447603s)
	I0915 21:02:58.329179   60636 logs.go:123] Gathering logs for Docker ...
	I0915 21:02:58.329179   60636 ssh_runner.go:152] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0915 21:02:59.517340   60636 ssh_runner.go:192] Completed: /bin/bash -c "sudo journalctl -u docker -n 400": (1.1879461s)
	I0915 21:02:59.540019   60636 logs.go:123] Gathering logs for kube-scheduler [c93c1026cc1a] ...
	I0915 21:02:59.540174   60636 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 c93c1026cc1a"
	I0915 21:03:00.457615   59380 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 edc543ebc579": (3.2897791s)
	I0915 21:03:00.459196   59380 logs.go:123] Gathering logs for kube-controller-manager [2444d44bba6b] ...
	I0915 21:03:00.459196   59380 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 2444d44bba6b"
	I0915 21:03:02.905669   59380 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 2444d44bba6b": (2.4464887s)
	I0915 21:03:02.906730   59380 logs.go:123] Gathering logs for kubelet ...
	I0915 21:03:02.906730   59380 ssh_runner.go:152] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 21:03:03.344788   60636 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 c93c1026cc1a": (3.8046393s)
	I0915 21:03:03.349747   60636 logs.go:123] Gathering logs for kube-controller-manager [3f363fd7e539] ...
	I0915 21:03:03.349747   60636 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 3f363fd7e539"
	I0915 21:03:03.939909   59380 ssh_runner.go:192] Completed: /bin/bash -c "sudo journalctl -u kubelet -n 400": (1.0331857s)
	W0915 21:03:04.019616   59380 logs.go:138] Found kubelet problem: Sep 15 21:01:39 old-k8s-version-20210915203352-22848 kubelet[6242]: E0915 21:01:39.816057    6242 pod_workers.go:190] Error syncing pod 13c53a65-1668-11ec-9ee7-0242535a2e8f ("metrics-server-8546d8b77b-pwkmq_kube-system(13c53a65-1668-11ec-9ee7-0242535a2e8f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	I0915 21:03:04.019616   59380 logs.go:123] Gathering logs for kube-proxy [b6a7846cc5e6] ...
	I0915 21:03:04.020627   59380 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 b6a7846cc5e6"
	I0915 21:03:07.698325   59380 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 b6a7846cc5e6": (3.6775764s)
	I0915 21:03:07.699351   59380 logs.go:123] Gathering logs for kubernetes-dashboard [cf62060e4365] ...
	I0915 21:03:07.699512   59380 ssh_runner.go:152] Run: /bin/bash -c "docker logs --tail 400 cf62060e4365"
	I0915 21:03:10.216457   60636 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 3f363fd7e539": (6.8665889s)
	I0915 21:03:10.236783   60636 logs.go:123] Gathering logs for kubelet ...
	I0915 21:03:10.236783   60636 ssh_runner.go:152] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 21:03:11.553417   60636 ssh_runner.go:192] Completed: /bin/bash -c "sudo journalctl -u kubelet -n 400": (1.3164683s)
	I0915 21:03:11.626181   59380 ssh_runner.go:192] Completed: /bin/bash -c "docker logs --tail 400 cf62060e4365": (3.9266946s)
	I0915 21:03:11.627244   59380 out.go:311] Setting ErrFile to fd 2480...
	I0915 21:03:11.627244   59380 out.go:345] TERM=,COLORTERM=, which probably does not support color
	W0915 21:03:11.628241   59380 out.go:242] X Problems detected in kubelet:
	W0915 21:03:11.628241   59380 out.go:242]   Sep 15 21:01:39 old-k8s-version-20210915203352-22848 kubelet[6242]: E0915 21:01:39.816057    6242 pod_workers.go:190] Error syncing pod 13c53a65-1668-11ec-9ee7-0242535a2e8f ("metrics-server-8546d8b77b-pwkmq_kube-system(13c53a65-1668-11ec-9ee7-0242535a2e8f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	I0915 21:03:11.628241   59380 out.go:311] Setting ErrFile to fd 2480...
	I0915 21:03:11.628241   59380 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0915 21:03:14.324659   60636 system_pods.go:59] 8 kube-system pods found
	I0915 21:03:14.324659   60636 system_pods.go:61] "coredns-78fcd69978-b6m27" [3fc7acc8-56a7-4172-9ecd-8109469c4dc3] Running
	I0915 21:03:14.324659   60636 system_pods.go:61] "etcd-embed-certs-20210915203657-22848" [32e94a70-1e65-4e2f-aef0-e4d70db9d83d] Running
	I0915 21:03:14.324659   60636 system_pods.go:61] "kube-apiserver-embed-certs-20210915203657-22848" [beade954-ec00-4a60-b123-7664ff8b8c40] Running
	I0915 21:03:14.324659   60636 system_pods.go:61] "kube-controller-manager-embed-certs-20210915203657-22848" [42f80b13-022a-4925-b978-11e841b8bea3] Running
	I0915 21:03:14.324659   60636 system_pods.go:61] "kube-proxy-v6dbz" [6b78a2c4-c67f-40cc-a21f-175097dcfd11] Running
	I0915 21:03:14.324659   60636 system_pods.go:61] "kube-scheduler-embed-certs-20210915203657-22848" [d746244e-81e0-4c15-ad1f-c02392773ff3] Running
	I0915 21:03:14.324659   60636 system_pods.go:61] "metrics-server-7c784ccb57-ljjc6" [9ca7acff-feb6-49d1-b680-6341086852fd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0915 21:03:14.324659   60636 system_pods.go:61] "storage-provisioner" [ccb11020-cc00-4835-8ea0-a2d46f6ac8cd] Running
	I0915 21:03:14.324659   60636 system_pods.go:74] duration metric: took 1m15.0832059s to wait for pod list to return data ...
	I0915 21:03:14.324659   60636 default_sa.go:34] waiting for default service account to be created ...
	I0915 21:03:14.422148   60636 default_sa.go:45] found service account: "default"
	I0915 21:03:14.422148   60636 default_sa.go:55] duration metric: took 97.4894ms for default service account to be created ...
	I0915 21:03:14.422313   60636 system_pods.go:116] waiting for k8s-apps to be running ...
	I0915 21:03:14.483718   60636 system_pods.go:86] 8 kube-system pods found
	I0915 21:03:14.483944   60636 system_pods.go:89] "coredns-78fcd69978-b6m27" [3fc7acc8-56a7-4172-9ecd-8109469c4dc3] Running
	I0915 21:03:14.483944   60636 system_pods.go:89] "etcd-embed-certs-20210915203657-22848" [32e94a70-1e65-4e2f-aef0-e4d70db9d83d] Running
	I0915 21:03:14.483944   60636 system_pods.go:89] "kube-apiserver-embed-certs-20210915203657-22848" [beade954-ec00-4a60-b123-7664ff8b8c40] Running
	I0915 21:03:14.483944   60636 system_pods.go:89] "kube-controller-manager-embed-certs-20210915203657-22848" [42f80b13-022a-4925-b978-11e841b8bea3] Running
	I0915 21:03:14.483944   60636 system_pods.go:89] "kube-proxy-v6dbz" [6b78a2c4-c67f-40cc-a21f-175097dcfd11] Running
	I0915 21:03:14.483944   60636 system_pods.go:89] "kube-scheduler-embed-certs-20210915203657-22848" [d746244e-81e0-4c15-ad1f-c02392773ff3] Running
	I0915 21:03:14.484174   60636 system_pods.go:89] "metrics-server-7c784ccb57-ljjc6" [9ca7acff-feb6-49d1-b680-6341086852fd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0915 21:03:14.484299   60636 system_pods.go:89] "storage-provisioner" [ccb11020-cc00-4835-8ea0-a2d46f6ac8cd] Running
	I0915 21:03:14.484400   60636 system_pods.go:126] duration metric: took 62.0873ms to wait for k8s-apps to be running ...
	I0915 21:03:14.484400   60636 system_svc.go:44] waiting for kubelet service to be running ....
	I0915 21:03:14.526937   60636 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
	I0915 21:03:15.168833   60636 system_svc.go:56] duration metric: took 684.2716ms WaitForService to wait for kubelet.
	I0915 21:03:15.168982   60636 kubeadm.go:547] duration metric: took 4m18.4807443s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0915 21:03:15.168982   60636 node_conditions.go:102] verifying NodePressure condition ...
	I0915 21:03:15.261303   60636 node_conditions.go:122] node storage ephemeral capacity is 65792556Ki
	I0915 21:03:15.261487   60636 node_conditions.go:123] node cpu capacity is 4
	I0915 21:03:15.261751   60636 node_conditions.go:105] duration metric: took 92.7691ms to run NodePressure ...
	I0915 21:03:15.262136   60636 start.go:231] waiting for startup goroutines ...
	I0915 21:03:15.610865   60636 start.go:462] kubectl: 1.20.0, cluster: 1.22.1 (minor skew: 2)
	I0915 21:03:15.615095   60636 out.go:177] 
	W0915 21:03:15.624359   60636 out.go:242] ! C:\Program Files\Docker\Docker\resources\bin\kubectl.exe is version 1.20.0, which may have incompatibilities with Kubernetes 1.22.1.
	I0915 21:03:15.630134   60636 out.go:177]   - Want kubectl v1.22.1? Try 'minikube kubectl -- get pods -A'
	I0915 21:03:15.634566   60636 out.go:177] * Done! kubectl is now configured to use "embed-certs-20210915203657-22848" cluster and "default" namespace by default
	I0915 21:03:21.642458   59380 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 21:03:22.962331   59380 ssh_runner.go:192] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.3198818s)
	I0915 21:03:22.962331   59380 api_server.go:70] duration metric: took 3m6.7549273s to wait for apiserver process to appear ...
	I0915 21:03:22.962331   59380 api_server.go:86] waiting for apiserver healthz status ...
	I0915 21:03:22.973266   59380 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 21:03:25.331696   59380 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: (2.3584456s)
	I0915 21:03:25.331696   59380 logs.go:270] 1 containers: [3537b9c6e4ed]
	I0915 21:03:25.354855   59380 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 21:03:28.602931   59380 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: (3.2480972s)
	I0915 21:03:28.602931   59380 logs.go:270] 1 containers: [0b52d929b618]
	I0915 21:03:28.619223   59380 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 21:03:31.478130   59380 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: (2.8589259s)
	I0915 21:03:31.478960   59380 logs.go:270] 2 containers: [56f408c7ef80 edc543ebc579]
	I0915 21:03:31.522508   59380 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 21:03:34.377512   59380 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: (2.8550217s)
	I0915 21:03:34.377512   59380 logs.go:270] 1 containers: [f62b255531be]
	I0915 21:03:34.393177   59380 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 21:03:35.911771   59380 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: (1.5186033s)
	I0915 21:03:35.911910   59380 logs.go:270] 1 containers: [b6a7846cc5e6]
	I0915 21:03:35.935753   59380 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0915 21:03:39.083471   59380 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}: (3.147738s)
	I0915 21:03:39.083471   59380 logs.go:270] 1 containers: [cf62060e4365]
	I0915 21:03:39.093501   59380 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 21:03:43.154841   59380 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: (4.0613659s)
	I0915 21:03:43.154841   59380 logs.go:270] 1 containers: [2708aca9f909]
	I0915 21:03:43.166883   59380 ssh_runner.go:152] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 21:03:46.250128   59380 ssh_runner.go:192] Completed: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: (3.0832639s)
	I0915 21:03:46.250254   59380 logs.go:270] 2 containers: [5dcc93538d3a 2444d44bba6b]
	I0915 21:03:46.250254   59380 logs.go:123] Gathering logs for describe nodes ...
	I0915 21:03:46.250254   59380 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2021-09-15 20:50:33 UTC, end at Wed 2021-09-15 21:04:39 UTC. --
	Sep 15 20:57:58 old-k8s-version-20210915203352-22848 dockerd[214]: time="2021-09-15T20:57:58.425947300Z" level=info msg="ignoring event" container=56ee26d2d802f2506e662823720e9fe7dbc1eef4f710efafd2e8e61b4033b510 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 20:57:59 old-k8s-version-20210915203352-22848 dockerd[214]: time="2021-09-15T20:57:59.596047000Z" level=info msg="ignoring event" container=ccb8aef11c354521450bf9a6831ff63af7b5abf658fc65e6440a111b4ef3c8ab module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 20:58:01 old-k8s-version-20210915203352-22848 dockerd[214]: time="2021-09-15T20:58:01.010745800Z" level=info msg="ignoring event" container=2beb62466c63110c58ff11bdaaff94058570df1158a18d99db6d60de26269de3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 20:58:02 old-k8s-version-20210915203352-22848 dockerd[214]: time="2021-09-15T20:58:02.102844200Z" level=info msg="ignoring event" container=2438e41344f6aaaaf8cb11f307387a78ca44255f1d3489bec338d40a9d95379f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 20:58:03 old-k8s-version-20210915203352-22848 dockerd[214]: time="2021-09-15T20:58:03.121466800Z" level=info msg="ignoring event" container=feef17f2f4101fbcbdf992fecb64617535a192dc0b2715fc300d3d07fdcdde75 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 20:58:03 old-k8s-version-20210915203352-22848 dockerd[214]: time="2021-09-15T20:58:03.939445200Z" level=info msg="ignoring event" container=efdc8b6ad4e83db59d1cb39a988de65ddc36bd610d29f06aeb834a0177c3790a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 20:58:04 old-k8s-version-20210915203352-22848 dockerd[214]: time="2021-09-15T20:58:04.953238100Z" level=info msg="ignoring event" container=2c067aeb5ba34686d6da5c86200ff4049d81427db0da877be8f7ec40bafdba7e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 20:58:06 old-k8s-version-20210915203352-22848 dockerd[214]: time="2021-09-15T20:58:06.140677100Z" level=info msg="ignoring event" container=73357eba53e81044f79da41d275871f464579b561580c02566e6e0c3a8e86821 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 20:58:07 old-k8s-version-20210915203352-22848 dockerd[214]: time="2021-09-15T20:58:07.642640000Z" level=info msg="ignoring event" container=a980a1327edecf7860597138bf3b60c5f7007754c93e601c3547aeb75ce69928 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 20:58:08 old-k8s-version-20210915203352-22848 dockerd[214]: time="2021-09-15T20:58:08.512471500Z" level=info msg="ignoring event" container=24ded2f0bc5fd547e70c880e83f0b489d4370c336b8196a79eb791b1d08c9c84 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 20:58:09 old-k8s-version-20210915203352-22848 dockerd[214]: time="2021-09-15T20:58:09.750678100Z" level=info msg="ignoring event" container=635446c5c939457df157ed6248a9678aa6d0b08a8cb665f675ff428f80234848 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 20:58:10 old-k8s-version-20210915203352-22848 dockerd[214]: time="2021-09-15T20:58:10.588631000Z" level=info msg="ignoring event" container=8c7e8f6108b1e5641b5db89590d893dd234df882a8762e39982cc0061ec4c617 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 20:58:11 old-k8s-version-20210915203352-22848 dockerd[214]: time="2021-09-15T20:58:11.541403500Z" level=info msg="ignoring event" container=835c124cbb5238b109fb89a83db01675230fed0a3526b82346a45b354b29a00b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 20:58:12 old-k8s-version-20210915203352-22848 dockerd[214]: time="2021-09-15T20:58:12.307269900Z" level=info msg="ignoring event" container=14e5df03f4638129578faba821e29236da7fdf85b81216695522fa3128b3bd03 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 20:59:28 old-k8s-version-20210915203352-22848 dockerd[214]: time="2021-09-15T20:59:28.641374700Z" level=info msg="ignoring event" container=2444d44bba6bcad13f8a3b53c12828e6dcc9f1e27cfb8d61dc7f78aefc1bab46 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 21:00:34 old-k8s-version-20210915203352-22848 dockerd[214]: time="2021-09-15T21:00:34.074065700Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 15 21:00:34 old-k8s-version-20210915203352-22848 dockerd[214]: time="2021-09-15T21:00:34.082045300Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 15 21:00:35 old-k8s-version-20210915203352-22848 dockerd[214]: time="2021-09-15T21:00:35.529516300Z" level=error msg="048d22dff159e586f83048bee5732b80c7d47dc9bd43d743799997e173fd66e3 cleanup: failed to delete container from containerd: no such container"
	Sep 15 21:01:26 old-k8s-version-20210915203352-22848 dockerd[214]: time="2021-09-15T21:01:26.357836300Z" level=info msg="ignoring event" container=edc543ebc579c6ea305637e052e85c6e98ef03c01ee9911f04b440841033db54 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 21:01:39 old-k8s-version-20210915203352-22848 dockerd[214]: time="2021-09-15T21:01:39.687514200Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Sep 15 21:01:39 old-k8s-version-20210915203352-22848 dockerd[214]: time="2021-09-15T21:01:39.687576900Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Sep 15 21:01:39 old-k8s-version-20210915203352-22848 dockerd[214]: time="2021-09-15T21:01:39.759521400Z" level=error msg="Handler for POST /v1.38/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Sep 15 21:01:53 old-k8s-version-20210915203352-22848 dockerd[214]: time="2021-09-15T21:01:53.093208400Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	Sep 15 21:01:53 old-k8s-version-20210915203352-22848 dockerd[214]: time="2021-09-15T21:01:53.644419600Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	Sep 15 21:02:15 old-k8s-version-20210915203352-22848 dockerd[214]: time="2021-09-15T21:02:15.373659500Z" level=info msg="Download failed, retrying (1/5): unexpected EOF"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	cf62060e4365a       9a07b5b4bfac0       2 minutes ago       Running             kubernetes-dashboard      0                   a0de183f3af6f
	56f408c7ef80a       eb516548c180f       2 minutes ago       Running             coredns                   1                   79e7a6ac35951
	2708aca9f9090       6e38f40d628db       3 minutes ago       Running             storage-provisioner       0                   ffccf2c6a501f
	edc543ebc579c       eb516548c180f       3 minutes ago       Exited              coredns                   0                   79e7a6ac35951
	b6a7846cc5e6f       5cd54e388abaf       4 minutes ago       Running             kube-proxy                0                   7fd4d7ebc1ffb
	5dcc93538d3a2       b95b1efa0436b       5 minutes ago       Running             kube-controller-manager   1                   9d3be78e963ef
	3537b9c6e4edd       ecf910f40d6e0       6 minutes ago       Running             kube-apiserver            0                   5e3b3c9e3c200
	f62b255531be6       00638a24688b0       6 minutes ago       Running             kube-scheduler            0                   35bd3c2d2864d
	2444d44bba6bc       b95b1efa0436b       6 minutes ago       Exited              kube-controller-manager   0                   9d3be78e963ef
	0b52d929b6188       2c4adeb21b4ff       6 minutes ago       Running             etcd                      0                   e04fd4934d629
	
	* 
	* ==> coredns [56f408c7ef80] <==
	* .:53
	2021-09-15T21:01:54.683Z [INFO] CoreDNS-1.3.1
	2021-09-15T21:01:54.683Z [INFO] linux/amd64, go1.11.4, 6b56a9c
	CoreDNS-1.3.1
	linux/amd64, go1.11.4, 6b56a9c
	2021-09-15T21:01:54.683Z [INFO] plugin/reload: Running configuration MD5 = 0d0ce6260bc0c3008bf1060be1c57923
	
	* 
	* ==> coredns [edc543ebc579] <==
	* .:53
	2021-09-15T21:01:08.338Z [INFO] CoreDNS-1.3.1
	2021-09-15T21:01:08.338Z [INFO] linux/amd64, go1.11.4, 6b56a9c
	CoreDNS-1.3.1
	linux/amd64, go1.11.4, 6b56a9c
	2021-09-15T21:01:08.338Z [INFO] plugin/reload: Running configuration MD5 = 599b9eb76b8c147408aed6a0bbe0f669
	E0915 21:01:24.346717       1 reflector.go:134] github.com/coredns/coredns/plugin/kubernetes/controller.go:315: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused
	E0915 21:01:24.346717       1 reflector.go:134] github.com/coredns/coredns/plugin/kubernetes/controller.go:315: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused
	log: exiting because of error: log: cannot create log: open /tmp/coredns.coredns-fb8b8dccf-pxwb4.unknownuser.log.ERROR.20210915-210124.1: no such file or directory
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-20210915203352-22848
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-20210915203352-22848
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0d321606059ead2904f4f5ddd59a9a7026c7ee04
	                    minikube.k8s.io/name=old-k8s-version-20210915203352-22848
	                    minikube.k8s.io/updated_at=2021_09_15T20_59_49_0700
	                    minikube.k8s.io/version=v1.23.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 15 Sep 2021 20:59:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 15 Sep 2021 21:04:00 +0000   Wed, 15 Sep 2021 20:59:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 15 Sep 2021 21:04:00 +0000   Wed, 15 Sep 2021 20:59:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 15 Sep 2021 21:04:00 +0000   Wed, 15 Sep 2021 20:59:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 15 Sep 2021 21:04:00 +0000   Wed, 15 Sep 2021 20:59:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-20210915203352-22848
	Capacity:
	 cpu:                4
	 ephemeral-storage:  65792556Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             20481980Ki
	 pods:               110
	Allocatable:
	 cpu:                4
	 ephemeral-storage:  65792556Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             20481980Ki
	 pods:               110
	System Info:
	 Machine ID:                 4b5e5cdd53d44f5ab575bb522d42acca
	 System UUID:                a6b6ae6e-adde-4157-81eb-073f2cc75b74
	 Boot ID:                    7b7b18db-3e3e-49d3-a2cb-ac38329b7bd9
	 Kernel Version:             4.19.121-linuxkit
	 OS Image:                   Ubuntu 20.04.2 LTS
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  docker://20.10.8
	 Kubelet Version:            v1.14.0
	 Kube-Proxy Version:         v1.14.0
	PodCIDR:                     10.244.0.0/24
	Non-terminated Pods:         (10 in total)
	  Namespace                  Name                                                            CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                                            ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-fb8b8dccf-pxwb4                                         100m (2%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m32s
	  kube-system                etcd-old-k8s-version-20210915203352-22848                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                kube-apiserver-old-k8s-version-20210915203352-22848             250m (6%)     0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                kube-controller-manager-old-k8s-version-20210915203352-22848    200m (5%)     0 (0%)      0 (0%)           0 (0%)         5m11s
	  kube-system                kube-proxy-fjmvd                                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                kube-scheduler-old-k8s-version-20210915203352-22848             100m (2%)     0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                metrics-server-8546d8b77b-pwkmq                                 100m (2%)     0 (0%)      300Mi (1%)       0 (0%)         3m22s
	  kube-system                storage-provisioner                                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m23s
	  kubernetes-dashboard       dashboard-metrics-scraper-5b494cc544-2mvzw                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m14s
	  kubernetes-dashboard       kubernetes-dashboard-5d8978d65d-8p7lf                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (18%)  0 (0%)
	  memory             370Mi (1%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                   From                                              Message
	  ----    ------                   ----                  ----                                              -------
	  Normal  NodeHasSufficientMemory  6m8s (x8 over 6m11s)  kubelet, old-k8s-version-20210915203352-22848     Node old-k8s-version-20210915203352-22848 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m8s (x8 over 6m11s)  kubelet, old-k8s-version-20210915203352-22848     Node old-k8s-version-20210915203352-22848 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m8s (x7 over 6m11s)  kubelet, old-k8s-version-20210915203352-22848     Node old-k8s-version-20210915203352-22848 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m41s                 kube-proxy, old-k8s-version-20210915203352-22848  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.000000]  hv_stimer0_isr+0x20/0x2d
	[  +0.000000]  hv_stimer0_vector_handler+0x3b/0x57
	[  +0.000000]  hv_stimer0_callback_vector+0xf/0x20
	[  +0.000000]  </IRQ>
	[  +0.000000] RIP: 0010:arch_local_irq_enable+0x7/0x8
	[  +0.000000] Code: ef ff ff 0f 20 d8 0f 1f 40 00 c3 48 89 f8 0f 1f 40 00 c3 48 89 f8 0f 1f 40 00 c3 48 89 f8 0f 1f 40 00 c3 fb 66 0f 1f 44 00 00 <c3> 0f 1f 44 00 00 40 f6 c7 02 74 12 48 b8 ff 0f 00 00 00 00 f0 ff
	[  +0.000000] RSP: 0000:ffffbcaf423f7ee0 EFLAGS: 00000206 ORIG_RAX: ffffffffffffff12
	[  +0.000000] RAX: 0000000080000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000000] RDX: 000055a9735499db RSI: 0000000000000004 RDI: ffffbcaf423f7f58
	[  +0.000000] RBP: ffffbcaf423f7f58 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000000] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000004
	[  +0.000000] R13: 000055a9735499db R14: ffff97d483b18dc0 R15: ffff97d4e4dc7400
	[  +0.000000]  __do_page_fault+0x17f/0x42d
	[  +0.000000]  ? page_fault+0x8/0x30
	[  +0.000000]  page_fault+0x1e/0x30
	[  +0.000000] RIP: 0033:0x55a9730c8f03
	[  +0.000000] Code: 0f 6f d9 66 0f ef 0d ec 85 97 00 66 0f ef 15 f4 85 97 00 66 0f ef 1d fc 85 97 00 66 0f 38 dc c9 66 0f 38 dc d2 66 0f 38 dc db <f3> 0f 6f 20 f3 0f 6f 68 10 f3 0f 6f 74 08 e0 f3 0f 6f 7c 08 f0 66
	[  +0.000000] RSP: 002b:000000c00004bdc8 EFLAGS: 00010287
	[  +0.000000] RAX: 000055a9735499db RBX: 000055a9730cb860 RCX: 0000000000000022
	[  +0.000000] RDX: 000000c00004bde0 RSI: 000000c00004be48 RDI: 000000c000080868
	[  +0.000000] RBP: 000000c00004be28 R08: 000055a97353d681 R09: 0000000000000000
	[  +0.000000] R10: 0000000000000004 R11: 000000c0000807d0 R12: 000000000000001a
	[  +0.000000] R13: 0000000000000006 R14: 0000000000000008 R15: 0000000000000017
	[  +0.000000] ---[ end trace cdbbbbc925f6eff0 ]---
	[Sep15 20:45] tee (198366): /proc/194820/oom_adj is deprecated, please use /proc/194820/oom_score_adj instead.
	
	* 
	* ==> etcd [0b52d929b618] <==
	* 2021-09-15 21:02:06.294535 W | etcdserver: read-only range request "key:\"/registry/clusterroles\" range_end:\"/registry/clusterrolet\" count_only:true " with result "range_response_count:0 size:7" took too long (178.035ms) to execute
	2021-09-15 21:02:06.561438 I | embed: rejected connection from "127.0.0.1:39362" (error "EOF", ServerName "")
	2021-09-15 21:02:07.257426 W | etcdserver: read-only range request "key:\"/registry/minions/old-k8s-version-20210915203352-22848\" " with result "range_response_count:1 size:3284" took too long (123.1187ms) to execute
	2021-09-15 21:02:07.877931 W | etcdserver: read-only range request "key:\"/registry/secrets\" range_end:\"/registry/secrett\" count_only:true " with result "range_response_count:0 size:7" took too long (415.9755ms) to execute
	2021-09-15 21:02:07.894717 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (199.3305ms) to execute
	2021-09-15 21:02:34.859092 W | etcdserver: request "header:<ID:9722563420904418568 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.85.2\" mod_revision:590 > success:<request_put:<key:\"/registry/masterleases/192.168.85.2\" value_size:67 lease:499191384049642758 >> failure:<request_range:<key:\"/registry/masterleases/192.168.85.2\" > >>" with result "size:16" took too long (134.4731ms) to execute
	2021-09-15 21:02:46.549813 W | etcdserver: request "header:<ID:9722563420904418602 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:06ed7beb4136ed29>" with result "size:40" took too long (334.5257ms) to execute
	2021-09-15 21:02:49.725821 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:766" took too long (114.4536ms) to execute
	2021-09-15 21:02:56.983932 W | etcdserver: request "header:<ID:9722563420904418640 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/minions/old-k8s-version-20210915203352-22848\" mod_revision:546 > success:<request_put:<key:\"/registry/minions/old-k8s-version-20210915203352-22848\" value_size:3208 >> failure:<request_range:<key:\"/registry/minions/old-k8s-version-20210915203352-22848\" > >>" with result "size:16" took too long (134.3574ms) to execute
	2021-09-15 21:03:00.596276 W | etcdserver: read-only range request "key:\"/registry/networkpolicies\" range_end:\"/registry/networkpoliciet\" count_only:true " with result "range_response_count:0 size:5" took too long (126.168ms) to execute
	2021-09-15 21:03:05.039330 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:766" took too long (123.2504ms) to execute
	2021-09-15 21:03:07.801275 W | etcdserver: read-only range request "key:\"/registry/clusterrolebindings\" range_end:\"/registry/clusterrolebindingt\" count_only:true " with result "range_response_count:0 size:7" took too long (131.9686ms) to execute
	2021-09-15 21:03:12.487095 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (148.5938ms) to execute
	2021-09-15 21:03:15.423874 W | etcdserver: read-only range request "key:\"/registry/masterleases/\" range_end:\"/registry/masterleases0\" " with result "range_response_count:1 size:133" took too long (138.4128ms) to execute
	2021-09-15 21:03:28.190213 W | etcdserver: read-only range request "key:\"/registry/services/specs\" range_end:\"/registry/services/spect\" count_only:true " with result "range_response_count:0 size:7" took too long (183.9059ms) to execute
	2021-09-15 21:03:42.525164 W | etcdserver: request "header:<ID:9722563420904418777 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:637 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:678 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>" with result "size:16" took too long (113.1488ms) to execute
	2021-09-15 21:03:48.011084 W | etcdserver: read-only range request "key:\"/registry/roles\" range_end:\"/registry/rolet\" count_only:true " with result "range_response_count:0 size:7" took too long (190.3614ms) to execute
	2021-09-15 21:03:59.164807 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:766" took too long (135.7852ms) to execute
	2021-09-15 21:04:01.988721 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:766" took too long (268.7992ms) to execute
	2021-09-15 21:04:01.996409 W | etcdserver: read-only range request "key:\"/registry/secrets\" range_end:\"/registry/secrett\" count_only:true " with result "range_response_count:0 size:7" took too long (312.8341ms) to execute
	2021-09-15 21:04:24.066954 W | etcdserver: read-only range request "key:\"/registry/apiregistration.k8s.io/apiservices\" range_end:\"/registry/apiregistration.k8s.io/apiservicet\" count_only:true " with result "range_response_count:0 size:7" took too long (285.0402ms) to execute
	2021-09-15 21:04:25.139587 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:172" took too long (175.5346ms) to execute
	2021-09-15 21:04:25.756785 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:215" took too long (129.7052ms) to execute
	2021-09-15 21:04:28.674437 W | etcdserver: read-only range request "key:\"/registry/ingress\" range_end:\"/registry/ingrest\" count_only:true " with result "range_response_count:0 size:5" took too long (328.7151ms) to execute
	2021-09-15 21:04:28.727438 W | etcdserver: read-only range request "key:\"foo\" " with result "range_response_count:0 size:5" took too long (414.8575ms) to execute
	
	* 
	* ==> kernel <==
	*  21:04:43 up  2:39,  0 users,  load average: 66.17, 57.87, 43.81
	Linux old-k8s-version-20210915203352-22848 4.19.121-linuxkit #1 SMP Thu Jan 21 15:36:34 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [3537b9c6e4ed] <==
	* I0915 21:04:32.529719       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0915 21:04:33.221563       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0915 21:04:33.540687       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0915 21:04:34.242858       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0915 21:04:34.245288       1 trace.go:81] Trace[573432358]: "List /apis/batch/v1beta1/cronjobs" (started: 2021-09-15 21:04:33.740872 +0000 UTC m=+347.681523601) (total time: 504.2192ms):
	Trace[573432358]: [485.5604ms] [485.4784ms] Listing from storage done
	I0915 21:04:34.556024       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0915 21:04:35.266693       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0915 21:04:35.571910       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0915 21:04:36.267425       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0915 21:04:36.579603       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0915 21:04:37.276275       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0915 21:04:37.580318       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0915 21:04:38.277698       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0915 21:04:38.588742       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0915 21:04:39.282418       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0915 21:04:39.592744       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0915 21:04:40.284089       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0915 21:04:40.605114       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0915 21:04:41.298811       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0915 21:04:41.606469       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0915 21:04:42.299410       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0915 21:04:42.608548       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0915 21:04:43.300993       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0915 21:04:43.609793       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	
	* 
	* ==> kube-controller-manager [2444d44bba6b] <==
	* I0915 20:58:49.637692       1 serving.go:319] Generated self-signed cert in-memory
	I0915 20:58:56.555015       1 controllermanager.go:155] Version: v1.14.0
	I0915 20:58:56.576218       1 secure_serving.go:116] Serving securely on 127.0.0.1:10257
	I0915 20:58:56.583465       1 deprecated_insecure_serving.go:51] Serving insecurely on [::]:10252
	F0915 20:59:23.652734       1 controllermanager.go:213] error building controller context: failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User "system:kube-controller-manager" cannot get path "/healthz"
	
	* 
	* ==> kube-controller-manager [5dcc93538d3a] <==
	* I0915 21:01:26.996575       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"164f2dcd-1668-11ec-9ee7-0242535a2e8f", APIVersion:"apps/v1", ResourceVersion:"470", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0915 21:01:26.996648       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"16c36f33-1668-11ec-9ee7-0242535a2e8f", APIVersion:"apps/v1", ResourceVersion:"479", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0915 21:01:27.048553       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-5d8978d65d" failed with pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0915 21:01:27.050128       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544" failed with pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0915 21:01:27.050317       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"16c36f33-1668-11ec-9ee7-0242535a2e8f", APIVersion:"apps/v1", ResourceVersion:"479", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0915 21:01:27.050430       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"164f2dcd-1668-11ec-9ee7-0242535a2e8f", APIVersion:"apps/v1", ResourceVersion:"470", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0915 21:01:27.217749       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-5d8978d65d" failed with pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0915 21:01:27.219854       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"16c36f33-1668-11ec-9ee7-0242535a2e8f", APIVersion:"apps/v1", ResourceVersion:"479", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0915 21:01:27.239153       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-5d8978d65d" failed with pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0915 21:01:27.240583       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"16c36f33-1668-11ec-9ee7-0242535a2e8f", APIVersion:"apps/v1", ResourceVersion:"479", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0915 21:01:28.462615       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"164f2dcd-1668-11ec-9ee7-0242535a2e8f", APIVersion:"apps/v1", ResourceVersion:"470", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-5b494cc544-2mvzw
	I0915 21:01:28.795730       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"16c36f33-1668-11ec-9ee7-0242535a2e8f", APIVersion:"apps/v1", ResourceVersion:"479", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-5d8978d65d-8p7lf
	E0915 21:01:40.878042       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0915 21:01:43.498596       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0915 21:02:11.209520       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0915 21:02:15.556647       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0915 21:02:41.553423       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0915 21:02:47.575119       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0915 21:03:11.826104       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0915 21:03:19.632005       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0915 21:03:42.194332       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0915 21:03:51.645105       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0915 21:04:12.484519       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0915 21:04:23.663773       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0915 21:04:42.770830       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	* 
	* ==> kube-proxy [b6a7846cc5e6] <==
	* W0915 21:00:54.501001       1 server_others.go:295] Flag proxy-mode="" unknown, assuming iptables proxy
	I0915 21:00:55.030056       1 server_others.go:148] Using iptables Proxier.
	I0915 21:00:55.032253       1 server_others.go:178] Tearing down inactive rules.
	E0915 21:00:58.200967       1 proxier.go:583] Error removing iptables rules in ipvs proxier: error deleting chain "KUBE-MARK-MASQ": exit status 1: iptables: Too many links.
	I0915 21:00:59.844320       1 server.go:555] Version: v1.14.0
	I0915 21:01:01.530012       1 config.go:202] Starting service config controller
	I0915 21:01:01.537130       1 controller_utils.go:1027] Waiting for caches to sync for service config controller
	I0915 21:01:01.538438       1 config.go:102] Starting endpoints config controller
	I0915 21:01:01.538491       1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
	I0915 21:01:01.756092       1 controller_utils.go:1034] Caches are synced for endpoints config controller
	I0915 21:01:01.854379       1 controller_utils.go:1034] Caches are synced for service config controller
	
	* 
	* ==> kube-scheduler [f62b255531be] <==
	* E0915 20:59:27.551711       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0915 20:59:27.592419       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0915 20:59:27.867730       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0915 20:59:28.087350       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0915 20:59:28.095370       1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0915 20:59:28.156173       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0915 20:59:28.422172       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0915 20:59:28.420014       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0915 20:59:28.488109       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0915 20:59:28.621372       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0915 20:59:28.623520       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0915 20:59:28.623620       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0915 20:59:28.879737       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0915 20:59:29.099230       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0915 20:59:29.119527       1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0915 20:59:29.181759       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0915 20:59:29.446114       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0915 20:59:29.530597       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0915 20:59:29.530769       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0915 20:59:29.654570       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0915 20:59:29.670562       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0915 20:59:29.670706       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0915 20:59:29.882997       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0915 20:59:31.619088       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
	I0915 20:59:31.727653       1 controller_utils.go:1034] Caches are synced for scheduler controller
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2021-09-15 20:50:33 UTC, end at Wed 2021-09-15 21:04:48 UTC. --
	Sep 15 21:00:50 old-k8s-version-20210915203352-22848 kubelet[6242]: W0915 21:00:50.611399    6242 pod_container_deletor.go:75] Container "79e7a6ac35951f5944ad0ffe265aaa418f2d0542cf7da68ba780d4ca8c9e6aa8" not found in pod's containers
	Sep 15 21:00:54 old-k8s-version-20210915203352-22848 kubelet[6242]: W0915 21:00:54.983430    6242 pod_container_deletor.go:75] Container "7fd4d7ebc1ffb9db3c02502f8a7cdba0a5716997c7761fb7e9f3a68a31fdfbf3" not found in pod's containers
	Sep 15 21:01:20 old-k8s-version-20210915203352-22848 kubelet[6242]: I0915 21:01:20.235661    6242 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/13467966-1668-11ec-9ee7-0242535a2e8f-tmp") pod "storage-provisioner" (UID: "13467966-1668-11ec-9ee7-0242535a2e8f")
	Sep 15 21:01:20 old-k8s-version-20210915203352-22848 kubelet[6242]: I0915 21:01:20.235761    6242 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-62qjl" (UniqueName: "kubernetes.io/secret/13467966-1668-11ec-9ee7-0242535a2e8f-storage-provisioner-token-62qjl") pod "storage-provisioner" (UID: "13467966-1668-11ec-9ee7-0242535a2e8f")
	Sep 15 21:01:23 old-k8s-version-20210915203352-22848 kubelet[6242]: I0915 21:01:23.433794    6242 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-dir" (UniqueName: "kubernetes.io/empty-dir/13c53a65-1668-11ec-9ee7-0242535a2e8f-tmp-dir") pod "metrics-server-8546d8b77b-pwkmq" (UID: "13c53a65-1668-11ec-9ee7-0242535a2e8f")
	Sep 15 21:01:23 old-k8s-version-20210915203352-22848 kubelet[6242]: I0915 21:01:23.471968    6242 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "metrics-server-token-pwdl7" (UniqueName: "kubernetes.io/secret/13c53a65-1668-11ec-9ee7-0242535a2e8f-metrics-server-token-pwdl7") pod "metrics-server-8546d8b77b-pwkmq" (UID: "13c53a65-1668-11ec-9ee7-0242535a2e8f")
	Sep 15 21:01:29 old-k8s-version-20210915203352-22848 kubelet[6242]: I0915 21:01:29.609073    6242 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/18789695-1668-11ec-9ee7-0242535a2e8f-tmp-volume") pod "kubernetes-dashboard-5d8978d65d-8p7lf" (UID: "18789695-1668-11ec-9ee7-0242535a2e8f")
	Sep 15 21:01:29 old-k8s-version-20210915203352-22848 kubelet[6242]: I0915 21:01:29.633018    6242 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-token-89vzx" (UniqueName: "kubernetes.io/secret/1868c3ae-1668-11ec-9ee7-0242535a2e8f-kubernetes-dashboard-token-89vzx") pod "dashboard-metrics-scraper-5b494cc544-2mvzw" (UID: "1868c3ae-1668-11ec-9ee7-0242535a2e8f")
	Sep 15 21:01:29 old-k8s-version-20210915203352-22848 kubelet[6242]: I0915 21:01:29.675966    6242 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/1868c3ae-1668-11ec-9ee7-0242535a2e8f-tmp-volume") pod "dashboard-metrics-scraper-5b494cc544-2mvzw" (UID: "1868c3ae-1668-11ec-9ee7-0242535a2e8f")
	Sep 15 21:01:29 old-k8s-version-20210915203352-22848 kubelet[6242]: I0915 21:01:29.676853    6242 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-token-89vzx" (UniqueName: "kubernetes.io/secret/18789695-1668-11ec-9ee7-0242535a2e8f-kubernetes-dashboard-token-89vzx") pod "kubernetes-dashboard-5d8978d65d-8p7lf" (UID: "18789695-1668-11ec-9ee7-0242535a2e8f")
	Sep 15 21:01:30 old-k8s-version-20210915203352-22848 kubelet[6242]: W0915 21:01:30.361117    6242 raw.go:87] Error while processing event ("/sys/fs/cgroup/memory/system.slice/run-r343d9355ba284f2fb0c7a84ecbfb4f67.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/run-r343d9355ba284f2fb0c7a84ecbfb4f67.scope: no such file or directory
	Sep 15 21:01:30 old-k8s-version-20210915203352-22848 kubelet[6242]: W0915 21:01:30.382445    6242 raw.go:87] Error while processing event ("/sys/fs/cgroup/devices/system.slice/run-r343d9355ba284f2fb0c7a84ecbfb4f67.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/system.slice/run-r343d9355ba284f2fb0c7a84ecbfb4f67.scope: no such file or directory
	Sep 15 21:01:30 old-k8s-version-20210915203352-22848 kubelet[6242]: W0915 21:01:30.580124    6242 raw.go:87] Error while processing event ("/sys/fs/cgroup/cpu/system.slice/run-r7fc63d8f9e7d426093ebd88b5364076f.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/cpu/system.slice/run-r7fc63d8f9e7d426093ebd88b5364076f.scope: no such file or directory
	Sep 15 21:01:30 old-k8s-version-20210915203352-22848 kubelet[6242]: W0915 21:01:30.690159    6242 raw.go:87] Error while processing event ("/sys/fs/cgroup/blkio/system.slice/run-r7fc63d8f9e7d426093ebd88b5364076f.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/system.slice/run-r7fc63d8f9e7d426093ebd88b5364076f.scope: no such file or directory
	Sep 15 21:01:30 old-k8s-version-20210915203352-22848 kubelet[6242]: W0915 21:01:30.832450    6242 raw.go:87] Error while processing event ("/sys/fs/cgroup/memory/system.slice/run-r7fc63d8f9e7d426093ebd88b5364076f.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/run-r7fc63d8f9e7d426093ebd88b5364076f.scope: no such file or directory
	Sep 15 21:01:31 old-k8s-version-20210915203352-22848 kubelet[6242]: W0915 21:01:30.995514    6242 raw.go:87] Error while processing event ("/sys/fs/cgroup/devices/system.slice/run-r7fc63d8f9e7d426093ebd88b5364076f.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/system.slice/run-r7fc63d8f9e7d426093ebd88b5364076f.scope: no such file or directory
	Sep 15 21:01:31 old-k8s-version-20210915203352-22848 kubelet[6242]: W0915 21:01:31.813566    6242 pod_container_deletor.go:75] Container "ffccf2c6a501f9281b076ddce286ed28d2cd1649040535a09521bacf16ea662a" not found in pod's containers
	Sep 15 21:01:39 old-k8s-version-20210915203352-22848 kubelet[6242]: W0915 21:01:39.025673    6242 pod_container_deletor.go:75] Container "64d34c2031150e385339f8f5e9028123682b0224177c194d682c0ff61f66c64e" not found in pod's containers
	Sep 15 21:01:39 old-k8s-version-20210915203352-22848 kubelet[6242]: W0915 21:01:39.314758    6242 container.go:409] Failed to create summary reader for "/system.slice/run-r343d9355ba284f2fb0c7a84ecbfb4f67.scope": none of the resources are being tracked.
	Sep 15 21:01:39 old-k8s-version-20210915203352-22848 kubelet[6242]: E0915 21:01:39.771555    6242 remote_image.go:113] PullImage "fake.domain/k8s.gcr.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	Sep 15 21:01:39 old-k8s-version-20210915203352-22848 kubelet[6242]: E0915 21:01:39.815630    6242 kuberuntime_image.go:51] Pull image "fake.domain/k8s.gcr.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	Sep 15 21:01:39 old-k8s-version-20210915203352-22848 kubelet[6242]: E0915 21:01:39.815888    6242 kuberuntime_manager.go:780] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	Sep 15 21:01:39 old-k8s-version-20210915203352-22848 kubelet[6242]: E0915 21:01:39.816057    6242 pod_workers.go:190] Error syncing pod 13c53a65-1668-11ec-9ee7-0242535a2e8f ("metrics-server-8546d8b77b-pwkmq_kube-system(13c53a65-1668-11ec-9ee7-0242535a2e8f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Sep 15 21:01:46 old-k8s-version-20210915203352-22848 kubelet[6242]: W0915 21:01:46.144802    6242 pod_container_deletor.go:75] Container "58a86504bd857ab953eff1cf4fad276027cc52ec88e3187c494390015e6482e8" not found in pod's containers
	Sep 15 21:01:52 old-k8s-version-20210915203352-22848 kubelet[6242]: W0915 21:01:52.828676    6242 pod_container_deletor.go:75] Container "a0de183f3af6f31e5711b9ea96d18e60af999f3cb5cdc6b6855615c20ddae4e1" not found in pod's containers
	
	* 
	* ==> kubernetes-dashboard [cf62060e4365] <==
	* 2021/09/15 21:02:08 Starting overwatch
	2021/09/15 21:02:08 Using namespace: kubernetes-dashboard
	2021/09/15 21:02:08 Using in-cluster config to connect to apiserver
	2021/09/15 21:02:08 Using secret token for csrf signing
	2021/09/15 21:02:08 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/09/15 21:02:08 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/09/15 21:02:08 Successful initial request to the apiserver, version: v1.14.0
	2021/09/15 21:02:08 Generating JWE encryption key
	2021/09/15 21:02:08 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/09/15 21:02:08 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/09/15 21:02:14 Initializing JWE encryption key from synchronized object
	2021/09/15 21:02:14 Creating in-cluster Sidecar client
	2021/09/15 21:02:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/09/15 21:02:15 Serving insecurely on HTTP port: 9090
	2021/09/15 21:02:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/09/15 21:03:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/09/15 21:03:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/09/15 21:04:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/09/15 21:04:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	* 
	* ==> storage-provisioner [2708aca9f909] <==
	* I0915 21:01:40.753119       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0915 21:01:41.321793       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0915 21:01:41.321949       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0915 21:01:41.635473       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0915 21:01:41.635782       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-20210915203352-22848_b70da16a-b32b-48c4-89a2-6508f6543fc9!
	I0915 21:01:41.747743       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"10021bd7-1668-11ec-9ee7-0242535a2e8f", APIVersion:"v1", ResourceVersion:"537", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-20210915203352-22848_b70da16a-b32b-48c4-89a2-6508f6543fc9 became leader
	I0915 21:01:42.246000       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-20210915203352-22848_b70da16a-b32b-48c4-89a2-6508f6543fc9!
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Executing "docker container inspect old-k8s-version-20210915203352-22848 --format={{.State.Status}}" took an unusually long time: 2.7778674s
	* Restarting the docker service may improve performance.

                                                
                                                
** /stderr **
helpers_test.go:255: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-20210915203352-22848 -n old-k8s-version-20210915203352-22848

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
helpers_test.go:255: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-20210915203352-22848 -n old-k8s-version-20210915203352-22848: (8.8482586s)
helpers_test.go:262: (dbg) Run:  kubectl --context old-k8s-version-20210915203352-22848 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
helpers_test.go:271: non-running pods: metrics-server-8546d8b77b-pwkmq dashboard-metrics-scraper-5b494cc544-2mvzw
helpers_test.go:273: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context old-k8s-version-20210915203352-22848 describe pod metrics-server-8546d8b77b-pwkmq dashboard-metrics-scraper-5b494cc544-2mvzw
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context old-k8s-version-20210915203352-22848 describe pod metrics-server-8546d8b77b-pwkmq dashboard-metrics-scraper-5b494cc544-2mvzw: exit status 1 (433.6146ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-8546d8b77b-pwkmq" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-5b494cc544-2mvzw" not found

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context old-k8s-version-20210915203352-22848 describe pod metrics-server-8546d8b77b-pwkmq dashboard-metrics-scraper-5b494cc544-2mvzw: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (882.53s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (0s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe start -p auto-20210915202329-22848 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker
net_test.go:99: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p auto-20210915202329-22848 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker: context deadline exceeded (0s)
net_test.go:101: failed start: context deadline exceeded
--- FAIL: TestNetworkPlugins/group/auto/Start (0.00s)

                                                
                                    
TestNetworkPlugins/group/false/Start (0s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe start -p false-20210915202338-22848 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker
net_test.go:99: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p false-20210915202338-22848 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker: context deadline exceeded (193.9µs)
net_test.go:101: failed start: context deadline exceeded
--- FAIL: TestNetworkPlugins/group/false/Start (0.00s)

                                                
                                    
TestNetworkPlugins/group/cilium/Start (0s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe start -p cilium-20210915202338-22848 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker
net_test.go:99: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p cilium-20210915202338-22848 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker: context deadline exceeded (363.7µs)
net_test.go:101: failed start: context deadline exceeded
--- FAIL: TestNetworkPlugins/group/cilium/Start (0.00s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (0s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe start -p calico-20210915202338-22848 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker
net_test.go:99: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p calico-20210915202338-22848 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker: context deadline exceeded (0s)
net_test.go:101: failed start: context deadline exceeded
--- FAIL: TestNetworkPlugins/group/calico/Start (0.00s)

                                                
                                    
TestNetworkPlugins/group/custom-weave/Start (0s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-weave/Start
net_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-weave-20210915202338-22848 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata\weavenet.yaml --driver=docker
net_test.go:99: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p custom-weave-20210915202338-22848 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata\weavenet.yaml --driver=docker: context deadline exceeded (0s)
net_test.go:101: failed start: context deadline exceeded
--- FAIL: TestNetworkPlugins/group/custom-weave/Start (0.00s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (0s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe start -p enable-default-cni-20210915202329-22848 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker
net_test.go:99: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p enable-default-cni-20210915202329-22848 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker: context deadline exceeded (0s)
net_test.go:101: failed start: context deadline exceeded
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (0.00s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (0s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe start -p kindnet-20210915202338-22848 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker
net_test.go:99: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kindnet-20210915202338-22848 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker: context deadline exceeded (155.7µs)
net_test.go:101: failed start: context deadline exceeded
--- FAIL: TestNetworkPlugins/group/kindnet/Start (0.00s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (0s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe start -p bridge-20210915202329-22848 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker
net_test.go:99: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p bridge-20210915202329-22848 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker: context deadline exceeded (204.3µs)
net_test.go:101: failed start: context deadline exceeded
--- FAIL: TestNetworkPlugins/group/bridge/Start (0.00s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (0s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubenet-20210915202329-22848 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker
net_test.go:99: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubenet-20210915202329-22848 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker: context deadline exceeded (162µs)
net_test.go:101: failed start: context deadline exceeded
--- FAIL: TestNetworkPlugins/group/kubenet/Start (0.00s)

                                                
                                    

Test pass (193/232)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.14.0/json-events 15.54
4 TestDownloadOnly/v1.14.0/preload-exists 0.01
7 TestDownloadOnly/v1.14.0/kubectl 0
8 TestDownloadOnly/v1.14.0/LogsDuration 0.76
10 TestDownloadOnly/v1.22.1/json-events 12.72
11 TestDownloadOnly/v1.22.1/preload-exists 0
14 TestDownloadOnly/v1.22.1/kubectl 0
15 TestDownloadOnly/v1.22.1/LogsDuration 0.67
17 TestDownloadOnly/v1.22.2-rc.0/json-events 12.73
18 TestDownloadOnly/v1.22.2-rc.0/preload-exists 0
21 TestDownloadOnly/v1.22.2-rc.0/kubectl 0
22 TestDownloadOnly/v1.22.2-rc.0/LogsDuration 0.66
23 TestDownloadOnly/DeleteAll 7.16
24 TestDownloadOnly/DeleteAlwaysSucceeds 4.65
25 TestDownloadOnlyKic 44.59
26 TestOffline 685.36
28 TestAddons/Setup 755.72
31 TestAddons/parallel/Ingress 67.29
32 TestAddons/parallel/MetricsServer 14.45
33 TestAddons/parallel/HelmTiller 50.48
34 TestAddons/parallel/Olm 375.13
35 TestAddons/parallel/CSI 221.72
36 TestAddons/parallel/GCPAuth 300.13
37 TestAddons/StoppedEnableDisable 31.31
39 TestDockerFlags 578.62
40 TestForceSystemdFlag 388.48
41 TestForceSystemdEnv 577.5
46 TestErrorSpam/setup 189.51
47 TestErrorSpam/start 13.99
48 TestErrorSpam/status 15.07
49 TestErrorSpam/pause 14.12
50 TestErrorSpam/unpause 15.34
51 TestErrorSpam/stop 30.04
54 TestFunctional/serial/CopySyncFile 0.08
55 TestFunctional/serial/StartWithProxy 198.55
56 TestFunctional/serial/AuditLog 0
57 TestFunctional/serial/SoftStart 28.21
58 TestFunctional/serial/KubeContext 0.17
59 TestFunctional/serial/KubectlGetPods 0.46
62 TestFunctional/serial/CacheCmd/cache/add_remote 17.35
63 TestFunctional/serial/CacheCmd/cache/add_local 8.89
64 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.54
65 TestFunctional/serial/CacheCmd/cache/list 0.5
66 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 4.73
67 TestFunctional/serial/CacheCmd/cache/cache_reload 19.61
68 TestFunctional/serial/CacheCmd/cache/delete 0.94
69 TestFunctional/serial/MinikubeKubectlCmd 2.59
70 TestFunctional/serial/MinikubeKubectlCmdDirectly 2.07
71 TestFunctional/serial/ExtraConfig 121.03
72 TestFunctional/serial/ComponentHealth 0.37
73 TestFunctional/serial/LogsCmd 8.59
74 TestFunctional/serial/LogsFileCmd 8.5
76 TestFunctional/parallel/ConfigCmd 3.08
79 TestFunctional/parallel/InternationalLanguage 4.06
84 TestFunctional/parallel/AddonsCmd 3.48
85 TestFunctional/parallel/PersistentVolumeClaim 107.11
87 TestFunctional/parallel/SSHCmd 11.75
88 TestFunctional/parallel/CpCmd 10.47
89 TestFunctional/parallel/MySQL 117.31
90 TestFunctional/parallel/FileSync 5.34
91 TestFunctional/parallel/CertSync 35.04
95 TestFunctional/parallel/NodeLabels 0.42
96 TestFunctional/parallel/LoadImage 17.6
97 TestFunctional/parallel/SaveImage 20.52
98 TestFunctional/parallel/RemoveImage 26.17
100 TestFunctional/parallel/SaveImageToFile 21.29
101 TestFunctional/parallel/BuildImage 19.21
102 TestFunctional/parallel/ListImages 4.43
103 TestFunctional/parallel/NonActiveRuntimeDisabled 6.09
105 TestFunctional/parallel/ProfileCmd/profile_not_create 9.75
106 TestFunctional/parallel/ProfileCmd/profile_list 7.04
107 TestFunctional/parallel/ProfileCmd/profile_json_output 6.73
108 TestFunctional/parallel/Version/short 0.62
109 TestFunctional/parallel/Version/components 8.91
110 TestFunctional/parallel/DockerEnv/powershell 23.7
111 TestFunctional/parallel/UpdateContextCmd/no_changes 3.2
112 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 3.17
113 TestFunctional/parallel/UpdateContextCmd/no_clusters 3.17
115 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 86.35
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.38
123 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
124 TestFunctional/delete_busybox_image 1.6
125 TestFunctional/delete_my-image_image 0.73
126 TestFunctional/delete_minikube_cached_images 0.67
130 TestJSONOutput/start/Command 199.37
131 TestJSONOutput/start/Audit 0
133 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
134 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
136 TestJSONOutput/pause/Command 5.62
137 TestJSONOutput/pause/Audit 0
139 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
140 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
142 TestJSONOutput/unpause/Command 5.28
143 TestJSONOutput/unpause/Audit 0
145 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
146 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
148 TestJSONOutput/stop/Command 19.09
149 TestJSONOutput/stop/Audit 0
151 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
152 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
153 TestErrorJSONOutput 5.09
155 TestKicCustomNetwork/create_custom_network 210.51
156 TestKicCustomNetwork/use_default_bridge_network 199.63
157 TestKicExistingNetwork 212.07
158 TestMainNoArgs 0.46
161 TestMultiNode/serial/FreshStart2Nodes 375.32
162 TestMultiNode/serial/DeployApp2Nodes 33.39
163 TestMultiNode/serial/PingHostFrom2Pods 11.95
164 TestMultiNode/serial/AddNode 158.87
165 TestMultiNode/serial/ProfileList 4.88
166 TestMultiNode/serial/CopyFile 36.97
167 TestMultiNode/serial/StopNode 23.22
168 TestMultiNode/serial/StartAfterStop 121.1
169 TestMultiNode/serial/RestartKeepsNodes 270.57
170 TestMultiNode/serial/DeleteNode 33.89
171 TestMultiNode/serial/StopMultiNode 39.38
172 TestMultiNode/serial/RestartMultiNode 163.21
173 TestMultiNode/serial/ValidateNameConflict 255.04
178 TestDebPackageInstall/install_amd64_debian_sid/minikube 0
179 TestDebPackageInstall/install_amd64_debian_sid/kvm2-driver 0
181 TestDebPackageInstall/install_amd64_debian_latest/minikube 0
182 TestDebPackageInstall/install_amd64_debian_latest/kvm2-driver 0
184 TestDebPackageInstall/install_amd64_debian_10/minikube 0
185 TestDebPackageInstall/install_amd64_debian_10/kvm2-driver 0
187 TestDebPackageInstall/install_amd64_debian_9/minikube 0
188 TestDebPackageInstall/install_amd64_debian_9/kvm2-driver 0
190 TestDebPackageInstall/install_amd64_ubuntu_latest/minikube 0
191 TestDebPackageInstall/install_amd64_ubuntu_latest/kvm2-driver 0
193 TestDebPackageInstall/install_amd64_ubuntu_20.10/minikube 0
194 TestDebPackageInstall/install_amd64_ubuntu_20.10/kvm2-driver 0
196 TestDebPackageInstall/install_amd64_ubuntu_20.04/minikube 0
197 TestDebPackageInstall/install_amd64_ubuntu_20.04/kvm2-driver 0
199 TestDebPackageInstall/install_amd64_ubuntu_18.04/minikube 0
200 TestDebPackageInstall/install_amd64_ubuntu_18.04/kvm2-driver 0
201 TestPreload 421.2
204 TestSkaffold 304.55
206 TestInsufficientStorage 53.14
207 TestRunningBinaryUpgrade 981.14
209 TestKubernetesUpgrade 1187.26
210 TestMissingContainerUpgrade 756.23
212 TestPause/serial/Start 531.71
213 TestStoppedBinaryUpgrade/Upgrade 981.11
214 TestPause/serial/SecondStartNoReconfiguration 93.45
215 TestPause/serial/Pause 9.97
217 TestPause/serial/Unpause 9.8
218 TestPause/serial/PauseAgain 12.84
233 TestStoppedBinaryUpgrade/MinikubeLogs 16.4
240 TestStartStop/group/old-k8s-version/serial/FirstStart 907.27
242 TestStartStop/group/no-preload/serial/FirstStart 418.81
244 TestStartStop/group/embed-certs/serial/FirstStart 496.79
245 TestStartStop/group/no-preload/serial/DeployApp 33.21
246 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 15.56
247 TestStartStop/group/no-preload/serial/Stop 34.01
248 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 6.17
249 TestStartStop/group/no-preload/serial/SecondStart 883.31
250 TestStartStop/group/embed-certs/serial/DeployApp 60.38
251 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 15.61
252 TestStartStop/group/embed-certs/serial/Stop 36.53
253 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 6.3
254 TestStartStop/group/embed-certs/serial/SecondStart 976.56
255 TestStartStop/group/old-k8s-version/serial/DeployApp 29.36
256 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 13.09
257 TestStartStop/group/old-k8s-version/serial/Stop 31.05
258 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 5.45
261 TestStartStop/group/default-k8s-different-port/serial/FirstStart 535.03
262 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 113.2
263 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.52
264 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 7.78
265 TestStartStop/group/no-preload/serial/Pause 56.28
267 TestStartStop/group/newest-cni/serial/FirstStart 342.2
268 TestStartStop/group/default-k8s-different-port/serial/DeployApp 80.31
269 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.45
270 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 38.61
271 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 7.9
272 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 16.21
273 TestStartStop/group/embed-certs/serial/Pause 80.33
274 TestStartStop/group/default-k8s-different-port/serial/Stop 40.37
275 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 6.23
276 TestStartStop/group/default-k8s-different-port/serial/SecondStart 513.05
286 TestStartStop/group/newest-cni/serial/DeployApp 0
287 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 7.28
288 TestStartStop/group/newest-cni/serial/Stop 20.17
289 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 4.98
290 TestStartStop/group/newest-cni/serial/SecondStart 96.31
291 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
292 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
293 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 6.11
294 TestStartStop/group/newest-cni/serial/Pause 36.11
295 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 8.16
296 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 5.74
297 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 5.45
298 TestStartStop/group/default-k8s-different-port/serial/Pause 35.04
TestDownloadOnly/v1.14.0/json-events (15.54s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/json-events
aaa_download_only_test.go:70: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20210915182912-22848 --force --alsologtostderr --kubernetes-version=v1.14.0 --container-runtime=docker --driver=docker
aaa_download_only_test.go:70: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20210915182912-22848 --force --alsologtostderr --kubernetes-version=v1.14.0 --container-runtime=docker --driver=docker: (15.5408286s)
--- PASS: TestDownloadOnly/v1.14.0/json-events (15.54s)

TestDownloadOnly/v1.14.0/preload-exists (0.01s)

=== RUN   TestDownloadOnly/v1.14.0/preload-exists
--- PASS: TestDownloadOnly/v1.14.0/preload-exists (0.01s)

TestDownloadOnly/v1.14.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.14.0/kubectl
--- PASS: TestDownloadOnly/v1.14.0/kubectl (0.00s)

TestDownloadOnly/v1.14.0/LogsDuration (0.76s)

=== RUN   TestDownloadOnly/v1.14.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-20210915182912-22848
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-20210915182912-22848: exit status 85 (753.9997ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/09/15 18:29:13
	Running on machine: windows-server-1
	Binary: Built with gc go1.17 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0915 18:29:13.865710   26772 out.go:298] Setting OutFile to fd 636 ...
	I0915 18:29:13.867721   26772 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0915 18:29:13.867721   26772 out.go:311] Setting ErrFile to fd 640...
	I0915 18:29:13.867721   26772 out.go:345] TERM=,COLORTERM=, which probably does not support color
	W0915 18:29:13.916456   26772 root.go:291] Error reading config file at C:\Users\jenkins\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0915 18:29:13.941771   26772 out.go:305] Setting JSON to true
	I0915 18:29:13.949676   26772 start.go:111] hostinfo: {"hostname":"windows-server-1","uptime":9150027,"bootTime":1622580526,"procs":152,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"9879231f-6171-435d-bab4-5b366cc6391b"}
	W0915 18:29:13.949676   26772 start.go:119] gopshost.Virtualization returned error: not implemented yet
	I0915 18:29:13.957102   26772 notify.go:169] Checking for updates...
	I0915 18:29:13.966404   26772 driver.go:343] Setting default libvirt URI to qemu:///system
	I0915 18:29:15.863412   26772 docker.go:132] docker version: linux-20.10.5
	I0915 18:29:15.874874   26772 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0915 18:29:16.775362   26772 info.go:263] docker info: {ID:AZM6:4F7P:D7J3:PGKE:EIYN:3OQU:SEA3:BB2T:P6VC:GKKH:UKSA:R2VX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:47 SystemTime:2021-09-15 18:29:16.3591046 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://inde
x.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[]
ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0915 18:29:16.778840   26772 start.go:278] selected driver: docker
	I0915 18:29:16.778840   26772 start.go:751] validating driver "docker" against <nil>
	I0915 18:29:16.811597   26772 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0915 18:29:17.678767   26772 info.go:263] docker info: {ID:AZM6:4F7P:D7J3:PGKE:EIYN:3OQU:SEA3:BB2T:P6VC:GKKH:UKSA:R2VX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:47 SystemTime:2021-09-15 18:29:17.2941508 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://inde
x.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[]
ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0915 18:29:17.679183   26772 start_flags.go:264] no existing cluster config was found, will generate one from the flags 
	I0915 18:29:17.842223   26772 start_flags.go:345] Using suggested 15300MB memory alloc based on sys=61438MB, container=20001MB
	I0915 18:29:17.842626   26772 start_flags.go:719] Wait components to verify : map[apiserver:true system_pods:true]
	I0915 18:29:17.842626   26772 cni.go:93] Creating CNI manager for ""
	I0915 18:29:17.842626   26772 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0915 18:29:17.842626   26772 start_flags.go:278] config:
	{Name:download-only-20210915182912-22848 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:15300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:download-only-20210915182912-22848 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loca
l ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0915 18:29:17.846009   26772 cache.go:118] Beginning downloading kic base image for docker with docker
	I0915 18:29:17.848418   26772 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime docker
	I0915 18:29:17.848418   26772 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 in local docker daemon
	I0915 18:29:17.869976   26772 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v12-v1.14.0-docker-overlay2-amd64.tar.lz4
	I0915 18:29:17.869976   26772 cache.go:57] Caching tarball of preloaded images
	I0915 18:29:17.870967   26772 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime docker
	I0915 18:29:17.873785   26772 preload.go:237] getting checksum for preloaded-images-k8s-v12-v1.14.0-docker-overlay2-amd64.tar.lz4 ...
	I0915 18:29:17.901196   26772 download.go:92] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v12-v1.14.0-docker-overlay2-amd64.tar.lz4?checksum=md5:f9e1bc5997daac3e4aca6f6bb5ce5b14 -> C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v12-v1.14.0-docker-overlay2-amd64.tar.lz4
	I0915 18:29:18.508427   26772 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 to local cache
	I0915 18:29:18.508427   26772 localpath.go:146] windows sanitize: C:\Users\jenkins\minikube-integration\.minikube\cache\kic\kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56.tar -> C:\Users\jenkins\minikube-integration\.minikube\cache\kic\kicbase-builds_v0.0.26-1631295795-12425@sha256_7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56.tar
	I0915 18:29:18.508427   26772 localpath.go:146] windows sanitize: C:\Users\jenkins\minikube-integration\.minikube\cache\kic\kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56.tar -> C:\Users\jenkins\minikube-integration\.minikube\cache\kic\kicbase-builds_v0.0.26-1631295795-12425@sha256_7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56.tar
	I0915 18:29:18.508427   26772 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 in local cache directory
	I0915 18:29:18.509409   26772 image.go:119] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 to local cache
	I0915 18:29:22.577377   26772 preload.go:247] saving checksum for preloaded-images-k8s-v12-v1.14.0-docker-overlay2-amd64.tar.lz4 ...
	I0915 18:29:22.580110   26772 preload.go:254] verifying checksumm of C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v12-v1.14.0-docker-overlay2-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210915182912-22848"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.14.0/LogsDuration (0.76s)

TestDownloadOnly/v1.22.1/json-events (12.72s)

=== RUN   TestDownloadOnly/v1.22.1/json-events
aaa_download_only_test.go:70: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20210915182912-22848 --force --alsologtostderr --kubernetes-version=v1.22.1 --container-runtime=docker --driver=docker
aaa_download_only_test.go:70: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20210915182912-22848 --force --alsologtostderr --kubernetes-version=v1.22.1 --container-runtime=docker --driver=docker: (12.7209569s)
--- PASS: TestDownloadOnly/v1.22.1/json-events (12.72s)

TestDownloadOnly/v1.22.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.22.1/preload-exists
--- PASS: TestDownloadOnly/v1.22.1/preload-exists (0.00s)

TestDownloadOnly/v1.22.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.22.1/kubectl
--- PASS: TestDownloadOnly/v1.22.1/kubectl (0.00s)

TestDownloadOnly/v1.22.1/LogsDuration (0.67s)

=== RUN   TestDownloadOnly/v1.22.1/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-20210915182912-22848
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-20210915182912-22848: exit status 85 (671.1171ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/09/15 18:29:28
	Running on machine: windows-server-1
	Binary: Built with gc go1.17 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0915 18:29:28.748211   84412 out.go:298] Setting OutFile to fd 580 ...
	I0915 18:29:28.750472   84412 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0915 18:29:28.750472   84412 out.go:311] Setting ErrFile to fd 640...
	I0915 18:29:28.750472   84412 out.go:345] TERM=,COLORTERM=, which probably does not support color
	W0915 18:29:28.765211   84412 root.go:291] Error reading config file at C:\Users\jenkins\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins\minikube-integration\.minikube\config\config.json: The system cannot find the file specified.
	I0915 18:29:28.766212   84412 out.go:305] Setting JSON to true
	I0915 18:29:28.768855   84412 start.go:111] hostinfo: {"hostname":"windows-server-1","uptime":9150042,"bootTime":1622580526,"procs":152,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"9879231f-6171-435d-bab4-5b366cc6391b"}
	W0915 18:29:28.769721   84412 start.go:119] gopshost.Virtualization returned error: not implemented yet
	I0915 18:29:28.774558   84412 notify.go:169] Checking for updates...
	I0915 18:29:28.777971   84412 config.go:177] Loaded profile config "download-only-20210915182912-22848": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.14.0
	W0915 18:29:28.778850   84412 start.go:659] api.Load failed for download-only-20210915182912-22848: filestore "download-only-20210915182912-22848": Docker machine "download-only-20210915182912-22848" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0915 18:29:28.778850   84412 driver.go:343] Setting default libvirt URI to qemu:///system
	W0915 18:29:28.779268   84412 start.go:659] api.Load failed for download-only-20210915182912-22848: filestore "download-only-20210915182912-22848": Docker machine "download-only-20210915182912-22848" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0915 18:29:30.622115   84412 docker.go:132] docker version: linux-20.10.5
	I0915 18:29:30.638143   84412 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0915 18:29:31.501150   84412 info.go:263] docker info: {ID:AZM6:4F7P:D7J3:PGKE:EIYN:3OQU:SEA3:BB2T:P6VC:GKKH:UKSA:R2VX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:47 SystemTime:2021-09-15 18:29:31.1287081 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://inde
x.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[]
ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0915 18:29:31.504062   84412 start.go:278] selected driver: docker
	I0915 18:29:31.504062   84412 start.go:751] validating driver "docker" against &{Name:download-only-20210915182912-22848 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:15300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:download-only-20210915182912-22848 Namespace:default A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0915 18:29:31.534891   84412 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0915 18:29:32.375560   84412 info.go:263] docker info: {ID:AZM6:4F7P:D7J3:PGKE:EIYN:3OQU:SEA3:BB2T:P6VC:GKKH:UKSA:R2VX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:47 SystemTime:2021-09-15 18:29:32.0092952 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://inde
x.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[]
ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0915 18:29:32.442116   84412 cni.go:93] Creating CNI manager for ""
	I0915 18:29:32.442509   84412 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0915 18:29:32.442509   84412 start_flags.go:278] config:
	{Name:download-only-20210915182912-22848 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:15300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.1 ClusterName:download-only-20210915182912-22848 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loca
l ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0915 18:29:32.445892   84412 cache.go:118] Beginning downloading kic base image for docker with docker
	I0915 18:29:32.448377   84412 preload.go:131] Checking if preload exists for k8s version v1.22.1 and runtime docker
	I0915 18:29:32.448765   84412 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 in local docker daemon
	I0915 18:29:32.473867   84412 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v12-v1.22.1-docker-overlay2-amd64.tar.lz4
	I0915 18:29:32.474393   84412 cache.go:57] Caching tarball of preloaded images
	I0915 18:29:32.477071   84412 preload.go:131] Checking if preload exists for k8s version v1.22.1 and runtime docker
	I0915 18:29:32.479650   84412 preload.go:237] getting checksum for preloaded-images-k8s-v12-v1.22.1-docker-overlay2-amd64.tar.lz4 ...
	I0915 18:29:32.504579   84412 download.go:92] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v12-v1.22.1-docker-overlay2-amd64.tar.lz4?checksum=md5:df04359146fc74639fed093942461742 -> C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v12-v1.22.1-docker-overlay2-amd64.tar.lz4
	I0915 18:29:33.211015   84412 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 to local cache
	I0915 18:29:33.211015   84412 localpath.go:146] windows sanitize: C:\Users\jenkins\minikube-integration\.minikube\cache\kic\kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56.tar -> C:\Users\jenkins\minikube-integration\.minikube\cache\kic\kicbase-builds_v0.0.26-1631295795-12425@sha256_7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56.tar
	I0915 18:29:33.211015   84412 localpath.go:146] windows sanitize: C:\Users\jenkins\minikube-integration\.minikube\cache\kic\kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56.tar -> C:\Users\jenkins\minikube-integration\.minikube\cache\kic\kicbase-builds_v0.0.26-1631295795-12425@sha256_7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56.tar
	I0915 18:29:33.211015   84412 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 in local cache directory
	I0915 18:29:33.211015   84412 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 in local cache directory, skipping pull
	I0915 18:29:33.212040   84412 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 exists in cache, skipping pull
	I0915 18:29:33.212040   84412 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 as a tarball
	I0915 18:29:36.874989   84412 preload.go:247] saving checksum for preloaded-images-k8s-v12-v1.22.1-docker-overlay2-amd64.tar.lz4 ...
	I0915 18:29:36.876988   84412 preload.go:254] verifying checksumm of C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v12-v1.22.1-docker-overlay2-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210915182912-22848"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.22.1/LogsDuration (0.67s)

TestDownloadOnly/v1.22.2-rc.0/json-events (12.73s)

=== RUN   TestDownloadOnly/v1.22.2-rc.0/json-events
aaa_download_only_test.go:70: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20210915182912-22848 --force --alsologtostderr --kubernetes-version=v1.22.2-rc.0 --container-runtime=docker --driver=docker
aaa_download_only_test.go:70: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20210915182912-22848 --force --alsologtostderr --kubernetes-version=v1.22.2-rc.0 --container-runtime=docker --driver=docker: (12.7286605s)
--- PASS: TestDownloadOnly/v1.22.2-rc.0/json-events (12.73s)

TestDownloadOnly/v1.22.2-rc.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.22.2-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.22.2-rc.0/preload-exists (0.00s)

TestDownloadOnly/v1.22.2-rc.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.22.2-rc.0/kubectl
--- PASS: TestDownloadOnly/v1.22.2-rc.0/kubectl (0.00s)

TestDownloadOnly/v1.22.2-rc.0/LogsDuration (0.66s)

=== RUN   TestDownloadOnly/v1.22.2-rc.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-20210915182912-22848
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-20210915182912-22848: exit status 85 (660.2557ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/09/15 18:29:42
	Running on machine: windows-server-1
	Binary: Built with gc go1.17 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0915 18:29:42.131061   75740 out.go:298] Setting OutFile to fd 748 ...
	I0915 18:29:42.132427   75740 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0915 18:29:42.132427   75740 out.go:311] Setting ErrFile to fd 752...
	I0915 18:29:42.132427   75740 out.go:345] TERM=,COLORTERM=, which probably does not support color
	W0915 18:29:42.146179   75740 root.go:291] Error reading config file at C:\Users\jenkins\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins\minikube-integration\.minikube\config\config.json: The system cannot find the file specified.
	I0915 18:29:42.147175   75740 out.go:305] Setting JSON to true
	I0915 18:29:42.150402   75740 start.go:111] hostinfo: {"hostname":"windows-server-1","uptime":9150055,"bootTime":1622580527,"procs":152,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"9879231f-6171-435d-bab4-5b366cc6391b"}
	W0915 18:29:42.151223   75740 start.go:119] gopshost.Virtualization returned error: not implemented yet
	I0915 18:29:42.155130   75740 notify.go:169] Checking for updates...
	I0915 18:29:42.160012   75740 config.go:177] Loaded profile config "download-only-20210915182912-22848": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.1
	W0915 18:29:42.160387   75740 start.go:659] api.Load failed for download-only-20210915182912-22848: filestore "download-only-20210915182912-22848": Docker machine "download-only-20210915182912-22848" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0915 18:29:42.160808   75740 driver.go:343] Setting default libvirt URI to qemu:///system
	W0915 18:29:42.160808   75740 start.go:659] api.Load failed for download-only-20210915182912-22848: filestore "download-only-20210915182912-22848": Docker machine "download-only-20210915182912-22848" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0915 18:29:43.988959   75740 docker.go:132] docker version: linux-20.10.5
	I0915 18:29:44.003076   75740 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0915 18:29:44.865593   75740 info.go:263] docker info: {ID:AZM6:4F7P:D7J3:PGKE:EIYN:3OQU:SEA3:BB2T:P6VC:GKKH:UKSA:R2VX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:47 SystemTime:2021-09-15 18:29:44.4988531 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0915 18:29:44.868788   75740 start.go:278] selected driver: docker
	I0915 18:29:44.868788   75740 start.go:751] validating driver "docker" against &{Name:download-only-20210915182912-22848 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:15300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.1 ClusterName:download-only-20210915182912-22848 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.1 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0915 18:29:44.901493   75740 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0915 18:29:45.815915   75740 info.go:263] docker info: {ID:AZM6:4F7P:D7J3:PGKE:EIYN:3OQU:SEA3:BB2T:P6VC:GKKH:UKSA:R2VX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:47 SystemTime:2021-09-15 18:29:45.4228517 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0915 18:29:45.884778   75740 cni.go:93] Creating CNI manager for ""
	I0915 18:29:45.884778   75740 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0915 18:29:45.884778   75740 start_flags.go:278] config:
	{Name:download-only-20210915182912-22848 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:15300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.2-rc.0 ClusterName:download-only-20210915182912-22848 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.1 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0915 18:29:45.887777   75740 cache.go:118] Beginning downloading kic base image for docker with docker
	I0915 18:29:45.889781   75740 preload.go:131] Checking if preload exists for k8s version v1.22.2-rc.0 and runtime docker
	I0915 18:29:45.889781   75740 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 in local docker daemon
	I0915 18:29:45.921952   75740 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v12-v1.22.2-rc.0-docker-overlay2-amd64.tar.lz4
	I0915 18:29:45.921952   75740 cache.go:57] Caching tarball of preloaded images
	I0915 18:29:45.922564   75740 preload.go:131] Checking if preload exists for k8s version v1.22.2-rc.0 and runtime docker
	I0915 18:29:45.925751   75740 preload.go:237] getting checksum for preloaded-images-k8s-v12-v1.22.2-rc.0-docker-overlay2-amd64.tar.lz4 ...
	I0915 18:29:45.953260   75740 download.go:92] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v12-v1.22.2-rc.0-docker-overlay2-amd64.tar.lz4?checksum=md5:55e401cc9516bdfbac04c93d8ed559d4 -> C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v12-v1.22.2-rc.0-docker-overlay2-amd64.tar.lz4
	I0915 18:29:46.682383   75740 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 to local cache
	I0915 18:29:46.682383   75740 localpath.go:146] windows sanitize: C:\Users\jenkins\minikube-integration\.minikube\cache\kic\kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56.tar -> C:\Users\jenkins\minikube-integration\.minikube\cache\kic\kicbase-builds_v0.0.26-1631295795-12425@sha256_7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56.tar
	I0915 18:29:46.682383   75740 localpath.go:146] windows sanitize: C:\Users\jenkins\minikube-integration\.minikube\cache\kic\kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56.tar -> C:\Users\jenkins\minikube-integration\.minikube\cache\kic\kicbase-builds_v0.0.26-1631295795-12425@sha256_7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56.tar
	I0915 18:29:46.682383   75740 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 in local cache directory
	I0915 18:29:46.683371   75740 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 in local cache directory, skipping pull
	I0915 18:29:46.683371   75740 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 exists in cache, skipping pull
	I0915 18:29:46.683371   75740 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 as a tarball
	I0915 18:29:50.469788   75740 preload.go:247] saving checksum for preloaded-images-k8s-v12-v1.22.2-rc.0-docker-overlay2-amd64.tar.lz4 ...
	I0915 18:29:50.470771   75740 preload.go:254] verifying checksum of C:\Users\jenkins\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v12-v1.22.2-rc.0-docker-overlay2-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210915182912-22848"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.22.2-rc.0/LogsDuration (0.66s)

TestDownloadOnly/DeleteAll (7.16s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:190: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (7.1581037s)
--- PASS: TestDownloadOnly/DeleteAll (7.16s)

TestDownloadOnly/DeleteAlwaysSucceeds (4.65s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-20210915182912-22848
aaa_download_only_test.go:202: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-20210915182912-22848: (4.6498128s)
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (4.65s)

TestDownloadOnlyKic (44.59s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:227: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p download-docker-20210915183011-22848 --force --alsologtostderr --driver=docker
aaa_download_only_test.go:227: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p download-docker-20210915183011-22848 --force --alsologtostderr --driver=docker: (36.3892906s)
helpers_test.go:176: Cleaning up "download-docker-20210915183011-22848" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-docker-20210915183011-22848
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-docker-20210915183011-22848: (5.5746967s)
--- PASS: TestDownloadOnlyKic (44.59s)

TestOffline (685.36s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline

=== CONT  TestOffline
aab_offline_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-20210915200708-22848 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker

=== CONT  TestOffline
aab_offline_test.go:56: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-20210915200708-22848 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker: (10m59.5592545s)
helpers_test.go:176: Cleaning up "offline-docker-20210915200708-22848" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-20210915200708-22848

=== CONT  TestOffline
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-20210915200708-22848: (25.8017932s)
--- PASS: TestOffline (685.36s)

TestAddons/Setup (755.72s)

=== RUN   TestAddons/Setup
addons_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-20210915183056-22848 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=olm --addons=volumesnapshots --addons=csi-hostpath-driver --driver=docker --addons=ingress --addons=helm-tiller
addons_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-20210915183056-22848 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=olm --addons=volumesnapshots --addons=csi-hostpath-driver --driver=docker --addons=ingress --addons=helm-tiller: (11m28.9840015s)
addons_test.go:89: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20210915183056-22848 addons enable gcp-auth
addons_test.go:89: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20210915183056-22848 addons enable gcp-auth: (19.1997006s)
addons_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20210915183056-22848 addons enable gcp-auth --force
addons_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20210915183056-22848 addons enable gcp-auth --force: (47.4894042s)
--- PASS: TestAddons/Setup (755.72s)

TestAddons/parallel/Ingress (67.29s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:170: (dbg) Run:  kubectl --context addons-20210915183056-22848 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s

=== CONT  TestAddons/parallel/Ingress
addons_test.go:177: (dbg) Run:  kubectl --context addons-20210915183056-22848 replace --force -f testdata\nginx-ingv1.yaml
addons_test.go:177: (dbg) Done: kubectl --context addons-20210915183056-22848 replace --force -f testdata\nginx-ingv1.yaml: (3.2743467s)
addons_test.go:190: (dbg) Run:  kubectl --context addons-20210915183056-22848 replace --force -f testdata\nginx-pod-svc.yaml

=== CONT  TestAddons/parallel/Ingress
addons_test.go:190: (dbg) Done: kubectl --context addons-20210915183056-22848 replace --force -f testdata\nginx-pod-svc.yaml: (2.5182978s)
addons_test.go:195: (dbg) TestAddons/parallel/Ingress: waiting 4m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:343: "nginx" [3ceb5410-c286-4d94-b154-4b3b5ce5bb72] Pending
helpers_test.go:343: "nginx" [3ceb5410-c286-4d94-b154-4b3b5ce5bb72] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestAddons/parallel/Ingress
helpers_test.go:343: "nginx" [3ceb5410-c286-4d94-b154-4b3b5ce5bb72] Running

=== CONT  TestAddons/parallel/Ingress
addons_test.go:195: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 53.1421936s
addons_test.go:215: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20210915183056-22848 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"

=== CONT  TestAddons/parallel/Ingress
addons_test.go:215: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20210915183056-22848 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (6.7545837s)
addons_test.go:222: debug: unexpected stderr for out/minikube-windows-amd64.exe -p addons-20210915183056-22848 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'":
! Executing "docker container inspect addons-20210915183056-22848 --format={{.State.Status}}" took an unusually long time: 2.3969513s
* Restarting the docker service may improve performance.
addons_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20210915183056-22848 addons disable ingress --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Ingress (67.29s)

TestAddons/parallel/MetricsServer (14.45s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:330: metrics-server stabilized in 114.8931ms

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:332: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/MetricsServer
helpers_test.go:343: "metrics-server-77c99ccb96-p2wsh" [f129c96b-e59b-4b2f-9d98-660f431e22ef] Running

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:332: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.1038763s

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:338: (dbg) Run:  kubectl --context addons-20210915183056-22848 top pods -n kube-system

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:355: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20210915183056-22848 addons disable metrics-server --alsologtostderr -v=1

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:355: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20210915183056-22848 addons disable metrics-server --alsologtostderr -v=1: (8.6148914s)
--- PASS: TestAddons/parallel/MetricsServer (14.45s)

TestAddons/parallel/HelmTiller (50.48s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:379: tiller-deploy stabilized in 121.8818ms

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:381: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
helpers_test.go:343: "tiller-deploy-7d9fb5c894-h8jvw" [2f8ae13f-dc2c-4c89-bbda-8b2b41c93c07] Running

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:381: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.0954899s

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:396: (dbg) Run:  kubectl --context addons-20210915183056-22848 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:396: (dbg) Done: kubectl --context addons-20210915183056-22848 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (37.4743117s)
addons_test.go:401: kubectl --context addons-20210915183056-22848 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
Error attaching, falling back to logs: 
addons_test.go:413: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20210915183056-22848 addons disable helm-tiller --alsologtostderr -v=1

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:413: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20210915183056-22848 addons disable helm-tiller --alsologtostderr -v=1: (7.5271636s)
--- PASS: TestAddons/parallel/HelmTiller (50.48s)

TestAddons/parallel/Olm (375.13s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:425: (dbg) Run:  kubectl --context addons-20210915183056-22848 wait --for=condition=ready --namespace=olm pod --selector=app=catalog-operator --timeout=90s

=== CONT  TestAddons/parallel/Olm
addons_test.go:428: catalog-operator stabilized in 837.4382ms
addons_test.go:430: (dbg) Run:  kubectl --context addons-20210915183056-22848 wait --for=condition=ready --namespace=olm pod --selector=app=olm-operator --timeout=90s
addons_test.go:433: olm-operator stabilized in 1.1948611s
addons_test.go:435: (dbg) Run:  kubectl --context addons-20210915183056-22848 wait --for=condition=ready --namespace=olm pod --selector=app=packageserver --timeout=90s
addons_test.go:438: packageserver stabilized in 1.6011516s
addons_test.go:440: (dbg) Run:  kubectl --context addons-20210915183056-22848 wait --for=condition=ready --namespace=olm pod --selector=olm.catalogSource=operatorhubio-catalog --timeout=90s
addons_test.go:443: operatorhubio-catalog stabilized in 1.9394176s
addons_test.go:446: (dbg) Run:  kubectl --context addons-20210915183056-22848 create -f testdata\etcd.yaml
addons_test.go:446: (dbg) Done: kubectl --context addons-20210915183056-22848 create -f testdata\etcd.yaml: (1.0149756s)
addons_test.go:453: (dbg) Run:  kubectl --context addons-20210915183056-22848 get csv -n my-etcd
addons_test.go:458: kubectl --context addons-20210915183056-22848 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.

=== CONT  TestAddons/parallel/Olm
addons_test.go:453: (dbg) Run:  kubectl --context addons-20210915183056-22848 get csv -n my-etcd
addons_test.go:458: kubectl --context addons-20210915183056-22848 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.

=== CONT  TestAddons/parallel/Olm
addons_test.go:453: (dbg) Run:  kubectl --context addons-20210915183056-22848 get csv -n my-etcd

=== CONT  TestAddons/parallel/Olm
addons_test.go:458: kubectl --context addons-20210915183056-22848 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.

=== CONT  TestAddons/parallel/Olm
addons_test.go:453: (dbg) Run:  kubectl --context addons-20210915183056-22848 get csv -n my-etcd
addons_test.go:458: kubectl --context addons-20210915183056-22848 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.

=== CONT  TestAddons/parallel/Olm
addons_test.go:453: (dbg) Run:  kubectl --context addons-20210915183056-22848 get csv -n my-etcd
addons_test.go:458: kubectl --context addons-20210915183056-22848 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.

=== CONT  TestAddons/parallel/Olm
addons_test.go:453: (dbg) Run:  kubectl --context addons-20210915183056-22848 get csv -n my-etcd

=== CONT  TestAddons/parallel/Olm
addons_test.go:458: kubectl --context addons-20210915183056-22848 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.

=== CONT  TestAddons/parallel/Olm
addons_test.go:453: (dbg) Run:  kubectl --context addons-20210915183056-22848 get csv -n my-etcd
addons_test.go:458: kubectl --context addons-20210915183056-22848 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.

=== CONT  TestAddons/parallel/Olm
addons_test.go:453: (dbg) Run:  kubectl --context addons-20210915183056-22848 get csv -n my-etcd
addons_test.go:458: kubectl --context addons-20210915183056-22848 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.

=== CONT  TestAddons/parallel/Olm
addons_test.go:453: (dbg) Run:  kubectl --context addons-20210915183056-22848 get csv -n my-etcd

=== CONT  TestAddons/parallel/Olm
addons_test.go:453: (dbg) Run:  kubectl --context addons-20210915183056-22848 get csv -n my-etcd

=== CONT  TestAddons/parallel/Olm
addons_test.go:453: (dbg) Run:  kubectl --context addons-20210915183056-22848 get csv -n my-etcd

=== CONT  TestAddons/parallel/Olm
addons_test.go:453: (dbg) Run:  kubectl --context addons-20210915183056-22848 get csv -n my-etcd
--- PASS: TestAddons/parallel/Olm (375.13s)

TestAddons/parallel/CSI (221.72s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:484: csi-hostpath-driver pods stabilized in 58.5064ms
addons_test.go:487: (dbg) Run:  kubectl --context addons-20210915183056-22848 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:487: (dbg) Done: kubectl --context addons-20210915183056-22848 create -f testdata\csi-hostpath-driver\pvc.yaml: (1.0085707s)
addons_test.go:492: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20210915183056-22848 get pvc hpvc -o jsonpath={.status.phase} -n default

=== CONT  TestAddons/parallel/CSI
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20210915183056-22848 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:497: (dbg) Run:  kubectl --context addons-20210915183056-22848 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:502: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:343: "task-pv-pod" [84192a12-d475-422b-9b19-95117c8b6e5d] Pending
helpers_test.go:343: "task-pv-pod" [84192a12-d475-422b-9b19-95117c8b6e5d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
helpers_test.go:343: "task-pv-pod" [84192a12-d475-422b-9b19-95117c8b6e5d] Running
addons_test.go:502: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 1m8.0987233s
addons_test.go:507: (dbg) Run:  kubectl --context addons-20210915183056-22848 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:512: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:418: (dbg) Run:  kubectl --context addons-20210915183056-22848 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:426: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:418: (dbg) Run:  kubectl --context addons-20210915183056-22848 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:517: (dbg) Run:  kubectl --context addons-20210915183056-22848 delete pod task-pv-pod
addons_test.go:517: (dbg) Done: kubectl --context addons-20210915183056-22848 delete pod task-pv-pod: (4.3890043s)
addons_test.go:523: (dbg) Run:  kubectl --context addons-20210915183056-22848 delete pvc hpvc
addons_test.go:529: (dbg) Run:  kubectl --context addons-20210915183056-22848 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:529: (dbg) Done: kubectl --context addons-20210915183056-22848 create -f testdata\csi-hostpath-driver\pvc-restore.yaml: (1.0792653s)
addons_test.go:534: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20210915183056-22848 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20210915183056-22848 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:539: (dbg) Run:  kubectl --context addons-20210915183056-22848 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:539: (dbg) Done: kubectl --context addons-20210915183056-22848 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml: (1.0936251s)
addons_test.go:544: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:343: "task-pv-pod-restore" [03b84d8a-bffa-4f9d-8b9a-48ae4c00073d] Pending
helpers_test.go:343: "task-pv-pod-restore" [03b84d8a-bffa-4f9d-8b9a-48ae4c00073d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
helpers_test.go:343: "task-pv-pod-restore" [03b84d8a-bffa-4f9d-8b9a-48ae4c00073d] Running

=== CONT  TestAddons/parallel/CSI
addons_test.go:544: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 1m53.1051231s
addons_test.go:549: (dbg) Run:  kubectl --context addons-20210915183056-22848 delete pod task-pv-pod-restore
addons_test.go:549: (dbg) Done: kubectl --context addons-20210915183056-22848 delete pod task-pv-pod-restore: (10.4509637s)
addons_test.go:553: (dbg) Run:  kubectl --context addons-20210915183056-22848 delete pvc hpvc-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-20210915183056-22848 delete volumesnapshot new-snapshot-demo
addons_test.go:557: (dbg) Done: kubectl --context addons-20210915183056-22848 delete volumesnapshot new-snapshot-demo: (1.5408766s)
addons_test.go:561: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20210915183056-22848 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:565: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20210915183056-22848 addons disable volumesnapshots --alsologtostderr -v=1

=== CONT  TestAddons/parallel/CSI
addons_test.go:565: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20210915183056-22848 addons disable volumesnapshots --alsologtostderr -v=1: (11.9612999s)
--- PASS: TestAddons/parallel/CSI (221.72s)

TestAddons/parallel/GCPAuth (300.13s)

=== RUN   TestAddons/parallel/GCPAuth
=== PAUSE TestAddons/parallel/GCPAuth

=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:576: (dbg) Run:  kubectl --context addons-20210915183056-22848 create -f testdata\busybox.yaml
addons_test.go:576: (dbg) Done: kubectl --context addons-20210915183056-22848 create -f testdata\busybox.yaml: (1.1596044s)
addons_test.go:582: (dbg) TestAddons/parallel/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...

=== CONT  TestAddons/parallel/GCPAuth
helpers_test.go:343: "busybox" [32a974a6-2af9-4607-bdc3-22a790cbf28b] Pending

=== CONT  TestAddons/parallel/GCPAuth
helpers_test.go:343: "busybox" [32a974a6-2af9-4607-bdc3-22a790cbf28b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])

=== CONT  TestAddons/parallel/GCPAuth
helpers_test.go:343: "busybox" [32a974a6-2af9-4607-bdc3-22a790cbf28b] Running

=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:582: (dbg) TestAddons/parallel/GCPAuth: integration-test=busybox healthy within 36.0937074s
addons_test.go:588: (dbg) Run:  kubectl --context addons-20210915183056-22848 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:588: (dbg) Done: kubectl --context addons-20210915183056-22848 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS": (1.6953821s)
addons_test.go:625: (dbg) Run:  kubectl --context addons-20210915183056-22848 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:625: (dbg) Done: kubectl --context addons-20210915183056-22848 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT": (1.5389028s)
addons_test.go:641: (dbg) Run:  kubectl --context addons-20210915183056-22848 apply -f testdata\private-image.yaml
addons_test.go:641: (dbg) Done: kubectl --context addons-20210915183056-22848 apply -f testdata\private-image.yaml: (1.2116551s)
addons_test.go:648: (dbg) TestAddons/parallel/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image" in namespace "default" ...
helpers_test.go:343: "private-image-7ff9c8c74f-7lbr5" [548e2153-e79f-4c8f-b9d2-799e23896fa7] Pending / Ready:ContainersNotReady (containers with unready status: [private-image]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image])

=== CONT  TestAddons/parallel/GCPAuth
helpers_test.go:343: "private-image-7ff9c8c74f-7lbr5" [548e2153-e79f-4c8f-b9d2-799e23896fa7] Running

=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:648: (dbg) TestAddons/parallel/GCPAuth: integration-test=private-image healthy within 2m35.0531869s
addons_test.go:654: (dbg) Run:  kubectl --context addons-20210915183056-22848 apply -f testdata\private-image-eu.yaml
addons_test.go:654: (dbg) Done: kubectl --context addons-20210915183056-22848 apply -f testdata\private-image-eu.yaml: (1.430911s)
addons_test.go:661: (dbg) TestAddons/parallel/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image-eu" in namespace "default" ...
helpers_test.go:343: "private-image-eu-5956d58f9f-c4fjh" [23661238-0f89-4263-bc6b-e8de1724e1b2] Pending / Ready:ContainersNotReady (containers with unready status: [private-image-eu]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image-eu])

=== CONT  TestAddons/parallel/GCPAuth
helpers_test.go:343: "private-image-eu-5956d58f9f-c4fjh" [23661238-0f89-4263-bc6b-e8de1724e1b2] Running
addons_test.go:661: (dbg) TestAddons/parallel/GCPAuth: integration-test=private-image-eu healthy within 1m35.0881877s
addons_test.go:667: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20210915183056-22848 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:667: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20210915183056-22848 addons disable gcp-auth --alsologtostderr -v=1: (6.6558267s)
--- PASS: TestAddons/parallel/GCPAuth (300.13s)

TestAddons/StoppedEnableDisable (31.31s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-20210915183056-22848
addons_test.go:140: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-20210915183056-22848: (26.2916056s)
addons_test.go:144: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-20210915183056-22848
addons_test.go:144: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p addons-20210915183056-22848: (2.6418193s)
addons_test.go:148: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-20210915183056-22848
addons_test.go:148: (dbg) Done: out/minikube-windows-amd64.exe addons disable dashboard -p addons-20210915183056-22848: (2.3721026s)
--- PASS: TestAddons/StoppedEnableDisable (31.31s)

TestDockerFlags (578.62s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:46: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-20210915202413-22848 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker

=== CONT  TestDockerFlags
docker_test.go:46: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-20210915202413-22848 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker: (8m48.966041s)
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-20210915202413-22848 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-20210915202413-22848 ssh "sudo systemctl show docker --property=Environment --no-pager": (6.1725105s)
docker_test.go:62: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-20210915202413-22848 ssh "sudo systemctl show docker --property=ExecStart --no-pager"

=== CONT  TestDockerFlags
docker_test.go:62: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-20210915202413-22848 ssh "sudo systemctl show docker --property=ExecStart --no-pager": (6.4325206s)
helpers_test.go:176: Cleaning up "docker-flags-20210915202413-22848" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-20210915202413-22848
E0915 20:33:15.351503   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915183056-22848\client.crt: The system cannot find the path specified.

=== CONT  TestDockerFlags
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-20210915202413-22848: (37.0421485s)
--- PASS: TestDockerFlags (578.62s)

TestForceSystemdFlag (388.48s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:86: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-20210915201833-22848 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker

=== CONT  TestForceSystemdFlag
docker_test.go:86: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-20210915201833-22848 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker: (5m46.951858s)
docker_test.go:103: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-20210915201833-22848 ssh "docker info --format {{.CgroupDriver}}"

=== CONT  TestForceSystemdFlag
docker_test.go:103: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-20210915201833-22848 ssh "docker info --format {{.CgroupDriver}}": (9.2552392s)
helpers_test.go:176: Cleaning up "force-systemd-flag-20210915201833-22848" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-20210915201833-22848
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-20210915201833-22848: (32.2736045s)
--- PASS: TestForceSystemdFlag (388.48s)

TestForceSystemdEnv (577.5s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-20210915202338-22848 --memory=2048 --alsologtostderr -v=5 --driver=docker

=== CONT  TestForceSystemdEnv
docker_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-20210915202338-22848 --memory=2048 --alsologtostderr -v=5 --driver=docker: (8m49.8042543s)
docker_test.go:103: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-20210915202338-22848 ssh "docker info --format {{.CgroupDriver}}"
E0915 20:32:35.663006   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915185528-22848\client.crt: The system cannot find the path specified.

=== CONT  TestForceSystemdEnv
docker_test.go:103: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-20210915202338-22848 ssh "docker info --format {{.CgroupDriver}}": (14.1262024s)
helpers_test.go:176: Cleaning up "force-systemd-env-20210915202338-22848" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-20210915202338-22848

=== CONT  TestForceSystemdEnv
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-20210915202338-22848: (33.5661933s)
--- PASS: TestForceSystemdEnv (577.50s)

TestErrorSpam/setup (189.51s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:79: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-20210915185036-22848 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins\AppData\Local\Temp\nospam-20210915185036-22848 --driver=docker
E0915 18:53:32.291893   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915183056-22848\client.crt: The system cannot find the path specified.
E0915 18:53:32.316203   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915183056-22848\client.crt: The system cannot find the path specified.
E0915 18:53:32.326328   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915183056-22848\client.crt: The system cannot find the path specified.
E0915 18:53:32.346790   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915183056-22848\client.crt: The system cannot find the path specified.
E0915 18:53:32.387767   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915183056-22848\client.crt: The system cannot find the path specified.
E0915 18:53:32.468722   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915183056-22848\client.crt: The system cannot find the path specified.
E0915 18:53:32.628841   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915183056-22848\client.crt: The system cannot find the path specified.
E0915 18:53:32.949761   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915183056-22848\client.crt: The system cannot find the path specified.
E0915 18:53:33.591073   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915183056-22848\client.crt: The system cannot find the path specified.
E0915 18:53:34.873520   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915183056-22848\client.crt: The system cannot find the path specified.
E0915 18:53:37.435133   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915183056-22848\client.crt: The system cannot find the path specified.
E0915 18:53:42.556474   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915183056-22848\client.crt: The system cannot find the path specified.
error_spam_test.go:79: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-20210915185036-22848 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins\AppData\Local\Temp\nospam-20210915185036-22848 --driver=docker: (3m9.5126774s)
error_spam_test.go:89: acceptable stderr: "! C:\\Program Files\\Docker\\Docker\\resources\\bin\\kubectl.exe is version 1.20.0, which may have incompatibilites with Kubernetes 1.22.1."
--- PASS: TestErrorSpam/setup (189.51s)

TestErrorSpam/start (13.99s)

=== RUN   TestErrorSpam/start
error_spam_test.go:214: Cleaning up 1 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20210915185036-22848 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210915185036-22848 start --dry-run
error_spam_test.go:157: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20210915185036-22848 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210915185036-22848 start --dry-run: (4.8959901s)
error_spam_test.go:157: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20210915185036-22848 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210915185036-22848 start --dry-run
E0915 18:53:52.800044   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915183056-22848\client.crt: The system cannot find the path specified.
error_spam_test.go:157: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20210915185036-22848 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210915185036-22848 start --dry-run: (4.6244671s)
error_spam_test.go:180: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20210915185036-22848 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210915185036-22848 start --dry-run
error_spam_test.go:180: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20210915185036-22848 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210915185036-22848 start --dry-run: (4.4640956s)
--- PASS: TestErrorSpam/start (13.99s)

TestErrorSpam/status (15.07s)

=== RUN   TestErrorSpam/status
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20210915185036-22848 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210915185036-22848 status
error_spam_test.go:157: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20210915185036-22848 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210915185036-22848 status: (5.1412881s)
error_spam_test.go:157: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20210915185036-22848 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210915185036-22848 status
error_spam_test.go:157: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20210915185036-22848 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210915185036-22848 status: (4.9613191s)
error_spam_test.go:180: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20210915185036-22848 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210915185036-22848 status
E0915 18:54:13.281770   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915183056-22848\client.crt: The system cannot find the path specified.
error_spam_test.go:180: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20210915185036-22848 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210915185036-22848 status: (4.9607546s)
--- PASS: TestErrorSpam/status (15.07s)

TestErrorSpam/pause (14.12s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20210915185036-22848 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210915185036-22848 pause
error_spam_test.go:157: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20210915185036-22848 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210915185036-22848 pause: (5.3434836s)
error_spam_test.go:157: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20210915185036-22848 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210915185036-22848 pause
error_spam_test.go:157: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20210915185036-22848 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210915185036-22848 pause: (4.3846641s)
error_spam_test.go:180: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20210915185036-22848 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210915185036-22848 pause
error_spam_test.go:180: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20210915185036-22848 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210915185036-22848 pause: (4.3883279s)
--- PASS: TestErrorSpam/pause (14.12s)

TestErrorSpam/unpause (15.34s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20210915185036-22848 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210915185036-22848 unpause
error_spam_test.go:157: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20210915185036-22848 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210915185036-22848 unpause: (5.3700803s)
error_spam_test.go:157: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20210915185036-22848 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210915185036-22848 unpause
error_spam_test.go:157: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20210915185036-22848 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210915185036-22848 unpause: (5.0917684s)
error_spam_test.go:180: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20210915185036-22848 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210915185036-22848 unpause
error_spam_test.go:180: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20210915185036-22848 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210915185036-22848 unpause: (4.8702513s)
--- PASS: TestErrorSpam/unpause (15.34s)

TestErrorSpam/stop (30.04s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20210915185036-22848 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210915185036-22848 stop
E0915 18:54:54.243205   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915183056-22848\client.crt: The system cannot find the path specified.
error_spam_test.go:157: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20210915185036-22848 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210915185036-22848 stop: (19.0795361s)
error_spam_test.go:157: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20210915185036-22848 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210915185036-22848 stop
error_spam_test.go:157: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20210915185036-22848 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210915185036-22848 stop: (5.5084515s)
error_spam_test.go:180: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20210915185036-22848 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210915185036-22848 stop
error_spam_test.go:180: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20210915185036-22848 --log_dir C:\Users\jenkins\AppData\Local\Temp\nospam-20210915185036-22848 stop: (5.4475765s)
--- PASS: TestErrorSpam/stop (30.04s)

TestFunctional/serial/CopySyncFile (0.08s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1726: local sync path: C:\Users\jenkins\minikube-integration\.minikube\files\etc\test\nested\copy\22848\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.08s)

TestFunctional/serial/StartWithProxy (198.55s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2102: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20210915185528-22848 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker
E0915 18:56:16.164075   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915183056-22848\client.crt: The system cannot find the path specified.
E0915 18:58:32.288408   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915183056-22848\client.crt: The system cannot find the path specified.
functional_test.go:2102: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-20210915185528-22848 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker: (3m18.5425616s)
--- PASS: TestFunctional/serial/StartWithProxy (198.55s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (28.21s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:747: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20210915185528-22848 --alsologtostderr -v=8
E0915 18:59:00.006957   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915183056-22848\client.crt: The system cannot find the path specified.
functional_test.go:747: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-20210915185528-22848 --alsologtostderr -v=8: (28.1996554s)
functional_test.go:751: soft start took 28.2076239s for "functional-20210915185528-22848" cluster.
--- PASS: TestFunctional/serial/SoftStart (28.21s)

TestFunctional/serial/KubeContext (0.17s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:767: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.17s)

TestFunctional/serial/KubectlGetPods (0.46s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:780: (dbg) Run:  kubectl --context functional-20210915185528-22848 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.46s)

TestFunctional/serial/CacheCmd/cache/add_remote (17.35s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1102: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915185528-22848 cache add k8s.gcr.io/pause:3.1
functional_test.go:1102: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915185528-22848 cache add k8s.gcr.io/pause:3.1: (5.8690242s)
functional_test.go:1102: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915185528-22848 cache add k8s.gcr.io/pause:3.3
functional_test.go:1102: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915185528-22848 cache add k8s.gcr.io/pause:3.3: (5.8940086s)
functional_test.go:1102: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915185528-22848 cache add k8s.gcr.io/pause:latest
functional_test.go:1102: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915185528-22848 cache add k8s.gcr.io/pause:latest: (5.5858516s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (17.35s)

TestFunctional/serial/CacheCmd/cache/add_local (8.89s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1132: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20210915185528-22848 C:\Users\jenkins\AppData\Local\Temp\functional-20210915185528-228483635481118
functional_test.go:1132: (dbg) Done: docker build -t minikube-local-cache-test:functional-20210915185528-22848 C:\Users\jenkins\AppData\Local\Temp\functional-20210915185528-228483635481118: (2.2991176s)
functional_test.go:1144: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915185528-22848 cache add minikube-local-cache-test:functional-20210915185528-22848
functional_test.go:1144: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915185528-22848 cache add minikube-local-cache-test:functional-20210915185528-22848: (5.2983877s)
functional_test.go:1149: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915185528-22848 cache delete minikube-local-cache-test:functional-20210915185528-22848
functional_test.go:1138: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20210915185528-22848
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (8.89s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.54s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1156: (dbg) Run:  out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.54s)

TestFunctional/serial/CacheCmd/cache/list (0.5s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1163: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.50s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (4.73s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1176: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915185528-22848 ssh sudo crictl images
functional_test.go:1176: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915185528-22848 ssh sudo crictl images: (4.7297659s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (4.73s)

TestFunctional/serial/CacheCmd/cache/cache_reload (19.61s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1198: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915185528-22848 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1198: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915185528-22848 ssh sudo docker rmi k8s.gcr.io/pause:latest: (4.7917543s)
functional_test.go:1204: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915185528-22848 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1204: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20210915185528-22848 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (4.7502423s)
-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1209: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915185528-22848 cache reload
functional_test.go:1209: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915185528-22848 cache reload: (5.3940797s)
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915185528-22848 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1214: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915185528-22848 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: (4.6712517s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (19.61s)

TestFunctional/serial/CacheCmd/cache/delete (0.94s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1223: (dbg) Run:  out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:3.1
functional_test.go:1223: (dbg) Run:  out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.94s)

TestFunctional/serial/MinikubeKubectlCmd (2.59s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:798: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915185528-22848 kubectl -- --context functional-20210915185528-22848 get pods
functional_test.go:798: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915185528-22848 kubectl -- --context functional-20210915185528-22848 get pods: (2.5948725s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (2.59s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (2.07s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:821: (dbg) Run:  out\kubectl.exe --context functional-20210915185528-22848 get pods
functional_test.go:821: (dbg) Done: out\kubectl.exe --context functional-20210915185528-22848 get pods: (2.0470021s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (2.07s)

TestFunctional/serial/ExtraConfig (121.03s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:835: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20210915185528-22848 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:835: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-20210915185528-22848 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (2m1.0220623s)
functional_test.go:839: restart took 2m1.022688s for "functional-20210915185528-22848" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (121.03s)

TestFunctional/serial/ComponentHealth (0.37s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:886: (dbg) Run:  kubectl --context functional-20210915185528-22848 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:900: etcd phase: Running
functional_test.go:910: etcd status: Ready
functional_test.go:900: kube-apiserver phase: Running
functional_test.go:910: kube-apiserver status: Ready
functional_test.go:900: kube-controller-manager phase: Running
functional_test.go:910: kube-controller-manager status: Ready
functional_test.go:900: kube-scheduler phase: Running
functional_test.go:910: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.37s)

TestFunctional/serial/LogsCmd (8.59s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1285: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915185528-22848 logs
functional_test.go:1285: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915185528-22848 logs: (8.5938122s)
--- PASS: TestFunctional/serial/LogsCmd (8.59s)

TestFunctional/serial/LogsFileCmd (8.5s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1301: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915185528-22848 logs --file C:\Users\jenkins\AppData\Local\Temp\functional-20210915185528-22848182613846\logs.txt
functional_test.go:1301: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915185528-22848 logs --file C:\Users\jenkins\AppData\Local\Temp\functional-20210915185528-22848182613846\logs.txt: (8.4952752s)
--- PASS: TestFunctional/serial/LogsFileCmd (8.50s)

TestFunctional/parallel/ConfigCmd (3.08s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1249: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915185528-22848 config unset cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1249: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915185528-22848 config get cpus
functional_test.go:1249: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20210915185528-22848 config get cpus: exit status 14 (463.0531ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1249: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915185528-22848 config set cpus 2
functional_test.go:1249: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915185528-22848 config get cpus
functional_test.go:1249: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915185528-22848 config unset cpus
functional_test.go:1249: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915185528-22848 config get cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1249: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20210915185528-22848 config get cpus: exit status 14 (504.4976ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (3.08s)

TestFunctional/parallel/InternationalLanguage (4.06s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1076: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20210915185528-22848 --dry-run --memory 250MB --alsologtostderr --driver=docker
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1076: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-20210915185528-22848 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (4.055859s)
-- stdout --
	* [functional-20210915185528-22848] minikube v1.23.0 sur Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	  - KUBECONFIG=C:\Users\jenkins\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12425
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0915 19:03:54.345482   81064 out.go:298] Setting OutFile to fd 852 ...
	I0915 19:03:54.347487   81064 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0915 19:03:54.347487   81064 out.go:311] Setting ErrFile to fd 2060...
	I0915 19:03:54.347487   81064 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0915 19:03:54.374458   81064 out.go:305] Setting JSON to false
	I0915 19:03:54.380922   81064 start.go:111] hostinfo: {"hostname":"windows-server-1","uptime":9152107,"bootTime":1622580527,"procs":158,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"9879231f-6171-435d-bab4-5b366cc6391b"}
	W0915 19:03:54.381348   81064 start.go:119] gopshost.Virtualization returned error: not implemented yet
	I0915 19:03:54.383475   81064 out.go:177] * [functional-20210915185528-22848] minikube v1.23.0 sur Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	I0915 19:03:54.385466   81064 out.go:177]   - KUBECONFIG=C:\Users\jenkins\minikube-integration\kubeconfig
	I0915 19:03:54.387463   81064 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins\minikube-integration\.minikube
	I0915 19:03:54.390475   81064 out.go:177]   - MINIKUBE_LOCATION=12425
	I0915 19:03:54.391555   81064 config.go:177] Loaded profile config "functional-20210915185528-22848": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.1
	I0915 19:03:54.392523   81064 driver.go:343] Setting default libvirt URI to qemu:///system
	I0915 19:03:56.546364   81064 docker.go:132] docker version: linux-20.10.5
	I0915 19:03:56.568002   81064 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0915 19:03:57.813875   81064 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.2458805s)
	I0915 19:03:57.814888   81064 info.go:263] docker info: {ID:AZM6:4F7P:D7J3:PGKE:EIYN:3OQU:SEA3:BB2T:P6VC:GKKH:UKSA:R2VX Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:53 SystemTime:2021-09-15 19:03:57.2748606 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://inde
x.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[]
ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0915 19:03:57.817876   81064 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0915 19:03:57.817876   81064 start.go:278] selected driver: docker
	I0915 19:03:57.817876   81064 start.go:751] validating driver "docker" against &{Name:functional-20210915185528-22848 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.1 ClusterName:functional-20210915185528-22848 Namespace:default APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.22.1 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-prov
isioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0915 19:03:57.817876   81064 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0915 19:03:57.932807   81064 out.go:177] 
	W0915 19:03:57.933820   81064 out.go:242] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0915 19:03:57.940826   81064 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (4.06s)

TestFunctional/parallel/AddonsCmd (3.48s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1585: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915185528-22848 addons list
functional_test.go:1585: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915185528-22848 addons list: (2.8363213s)
functional_test.go:1596: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915185528-22848 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (3.48s)
TestFunctional/parallel/PersistentVolumeClaim (107.11s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:343: "storage-provisioner" [0079c81b-04ae-439f-be17-1a1ba8697238] Running
functional_test_pvc_test.go:45: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.0785638s
functional_test_pvc_test.go:50: (dbg) Run:  kubectl --context functional-20210915185528-22848 get storageclass -o=json
functional_test_pvc_test.go:70: (dbg) Run:  kubectl --context functional-20210915185528-22848 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:77: (dbg) Run:  kubectl --context functional-20210915185528-22848 get pvc myclaim -o=json
functional_test_pvc_test.go:77: (dbg) Run:  kubectl --context functional-20210915185528-22848 get pvc myclaim -o=json
functional_test_pvc_test.go:126: (dbg) Run:  kubectl --context functional-20210915185528-22848 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:343: "sp-pod" [c56334bc-7c67-4ae4-8ca1-009536dcd9c9] Pending
helpers_test.go:343: "sp-pod" [c56334bc-7c67-4ae4-8ca1-009536dcd9c9] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:343: "sp-pod" [c56334bc-7c67-4ae4-8ca1-009536dcd9c9] Running
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 1m23.0832614s
functional_test_pvc_test.go:101: (dbg) Run:  kubectl --context functional-20210915185528-22848 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:107: (dbg) Run:  kubectl --context functional-20210915185528-22848 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:107: (dbg) Done: kubectl --context functional-20210915185528-22848 delete -f testdata/storage-provisioner/pod.yaml: (1.5681631s)
functional_test_pvc_test.go:126: (dbg) Run:  kubectl --context functional-20210915185528-22848 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:343: "sp-pod" [1ea99f9e-98e5-467c-890c-1c47bcff3574] Pending
helpers_test.go:343: "sp-pod" [1ea99f9e-98e5-467c-890c-1c47bcff3574] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:343: "sp-pod" [1ea99f9e-98e5-467c-890c-1c47bcff3574] Running
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.0529456s
functional_test_pvc_test.go:115: (dbg) Run:  kubectl --context functional-20210915185528-22848 exec sp-pod -- ls /tmp/mount
E0915 19:08:32.284616   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915183056-22848\client.crt: The system cannot find the path specified.
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (107.11s)
TestFunctional/parallel/SSHCmd (11.75s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1618: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915185528-22848 ssh "echo hello"
functional_test.go:1618: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915185528-22848 ssh "echo hello": (5.8528963s)
functional_test.go:1635: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915185528-22848 ssh "cat /etc/hostname"
functional_test.go:1635: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915185528-22848 ssh "cat /etc/hostname": (5.8959797s)
--- PASS: TestFunctional/parallel/SSHCmd (11.75s)
TestFunctional/parallel/CpCmd (10.47s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:535: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915185528-22848 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:535: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915185528-22848 cp testdata\cp-test.txt /home/docker/cp-test.txt: (4.857282s)
helpers_test.go:549: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915185528-22848 ssh "sudo cat /home/docker/cp-test.txt"
helpers_test.go:549: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915185528-22848 ssh "sudo cat /home/docker/cp-test.txt": (5.6109962s)
--- PASS: TestFunctional/parallel/CpCmd (10.47s)
TestFunctional/parallel/MySQL (117.31s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1666: (dbg) Run:  kubectl --context functional-20210915185528-22848 replace --force -f testdata\mysql.yaml
functional_test.go:1666: (dbg) Done: kubectl --context functional-20210915185528-22848 replace --force -f testdata\mysql.yaml: (1.0219516s)
functional_test.go:1671: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:343: "mysql-9bbbc5bbb-cw89n" [fe243119-a733-44de-9f68-51f5c3a980d2] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:343: "mysql-9bbbc5bbb-cw89n" [fe243119-a733-44de-9f68-51f5c3a980d2] Running
functional_test.go:1671: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 1m15.0979504s
functional_test.go:1678: (dbg) Run:  kubectl --context functional-20210915185528-22848 exec mysql-9bbbc5bbb-cw89n -- mysql -ppassword -e "show databases;"
functional_test.go:1678: (dbg) Non-zero exit: kubectl --context functional-20210915185528-22848 exec mysql-9bbbc5bbb-cw89n -- mysql -ppassword -e "show databases;": exit status 1 (1.2873453s)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1678: (dbg) Run:  kubectl --context functional-20210915185528-22848 exec mysql-9bbbc5bbb-cw89n -- mysql -ppassword -e "show databases;"
functional_test.go:1678: (dbg) Non-zero exit: kubectl --context functional-20210915185528-22848 exec mysql-9bbbc5bbb-cw89n -- mysql -ppassword -e "show databases;": exit status 1 (1.3819856s)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1678: (dbg) Run:  kubectl --context functional-20210915185528-22848 exec mysql-9bbbc5bbb-cw89n -- mysql -ppassword -e "show databases;"
functional_test.go:1678: (dbg) Non-zero exit: kubectl --context functional-20210915185528-22848 exec mysql-9bbbc5bbb-cw89n -- mysql -ppassword -e "show databases;": exit status 1 (1.5244596s)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1678: (dbg) Run:  kubectl --context functional-20210915185528-22848 exec mysql-9bbbc5bbb-cw89n -- mysql -ppassword -e "show databases;"
functional_test.go:1678: (dbg) Non-zero exit: kubectl --context functional-20210915185528-22848 exec mysql-9bbbc5bbb-cw89n -- mysql -ppassword -e "show databases;": exit status 1 (1.3041111s)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1678: (dbg) Run:  kubectl --context functional-20210915185528-22848 exec mysql-9bbbc5bbb-cw89n -- mysql -ppassword -e "show databases;"
functional_test.go:1678: (dbg) Non-zero exit: kubectl --context functional-20210915185528-22848 exec mysql-9bbbc5bbb-cw89n -- mysql -ppassword -e "show databases;": exit status 1 (1.4351168s)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1678: (dbg) Run:  kubectl --context functional-20210915185528-22848 exec mysql-9bbbc5bbb-cw89n -- mysql -ppassword -e "show databases;"
functional_test.go:1678: (dbg) Non-zero exit: kubectl --context functional-20210915185528-22848 exec mysql-9bbbc5bbb-cw89n -- mysql -ppassword -e "show databases;": exit status 1 (1.7694877s)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1678: (dbg) Run:  kubectl --context functional-20210915185528-22848 exec mysql-9bbbc5bbb-cw89n -- mysql -ppassword -e "show databases;"
functional_test.go:1678: (dbg) Non-zero exit: kubectl --context functional-20210915185528-22848 exec mysql-9bbbc5bbb-cw89n -- mysql -ppassword -e "show databases;": exit status 1 (986.9303ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1678: (dbg) Run:  kubectl --context functional-20210915185528-22848 exec mysql-9bbbc5bbb-cw89n -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (117.31s)
TestFunctional/parallel/FileSync (5.34s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1798: Checking for existence of /etc/test/nested/copy/22848/hosts within VM
functional_test.go:1799: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915185528-22848 ssh "sudo cat /etc/test/nested/copy/22848/hosts"
functional_test.go:1799: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915185528-22848 ssh "sudo cat /etc/test/nested/copy/22848/hosts": (5.3379569s)
functional_test.go:1804: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (5.34s)
TestFunctional/parallel/CertSync (35.04s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1839: Checking for existence of /etc/ssl/certs/22848.pem within VM
functional_test.go:1840: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915185528-22848 ssh "sudo cat /etc/ssl/certs/22848.pem"
functional_test.go:1840: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915185528-22848 ssh "sudo cat /etc/ssl/certs/22848.pem": (5.6357612s)
functional_test.go:1839: Checking for existence of /usr/share/ca-certificates/22848.pem within VM
functional_test.go:1840: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915185528-22848 ssh "sudo cat /usr/share/ca-certificates/22848.pem"
functional_test.go:1840: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915185528-22848 ssh "sudo cat /usr/share/ca-certificates/22848.pem": (6.0402453s)
functional_test.go:1839: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1840: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915185528-22848 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1840: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915185528-22848 ssh "sudo cat /etc/ssl/certs/51391683.0": (6.3097868s)
functional_test.go:1866: Checking for existence of /etc/ssl/certs/228482.pem within VM
functional_test.go:1867: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915185528-22848 ssh "sudo cat /etc/ssl/certs/228482.pem"
functional_test.go:1867: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915185528-22848 ssh "sudo cat /etc/ssl/certs/228482.pem": (5.8656185s)
functional_test.go:1866: Checking for existence of /usr/share/ca-certificates/228482.pem within VM
functional_test.go:1867: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915185528-22848 ssh "sudo cat /usr/share/ca-certificates/228482.pem"
functional_test.go:1867: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915185528-22848 ssh "sudo cat /usr/share/ca-certificates/228482.pem": (5.5101987s)
functional_test.go:1866: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1867: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915185528-22848 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1867: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915185528-22848 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (5.6663309s)
--- PASS: TestFunctional/parallel/CertSync (35.04s)
TestFunctional/parallel/NodeLabels (0.42s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-20210915185528-22848 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.42s)
TestFunctional/parallel/LoadImage (17.6s)
=== RUN   TestFunctional/parallel/LoadImage
=== PAUSE TestFunctional/parallel/LoadImage
=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:241: (dbg) Run:  docker pull busybox:1.33
functional_test.go:241: (dbg) Done: docker pull busybox:1.33: (3.6602285s)
functional_test.go:248: (dbg) Run:  docker tag busybox:1.33 docker.io/library/busybox:load-functional-20210915185528-22848
functional_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915185528-22848 image load --daemon docker.io/library/busybox:load-functional-20210915185528-22848
functional_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915185528-22848 image load --daemon docker.io/library/busybox:load-functional-20210915185528-22848: (6.9318674s)
functional_test.go:470: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p functional-20210915185528-22848 -- docker image inspect docker.io/library/busybox:load-functional-20210915185528-22848
functional_test.go:470: (dbg) Done: out/minikube-windows-amd64.exe ssh -p functional-20210915185528-22848 -- docker image inspect docker.io/library/busybox:load-functional-20210915185528-22848: (6.08926s)
--- PASS: TestFunctional/parallel/LoadImage (17.60s)
TestFunctional/parallel/SaveImage (20.52s)
=== RUN   TestFunctional/parallel/SaveImage
=== PAUSE TestFunctional/parallel/SaveImage
=== CONT  TestFunctional/parallel/SaveImage
functional_test.go:379: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915185528-22848 image pull docker.io/library/busybox:1.29
functional_test.go:379: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915185528-22848 image pull docker.io/library/busybox:1.29: (8.4934582s)
functional_test.go:388: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915185528-22848 image tag docker.io/library/busybox:1.29 docker.io/library/busybox:save-functional-20210915185528-22848
functional_test.go:388: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915185528-22848 image tag docker.io/library/busybox:1.29 docker.io/library/busybox:save-functional-20210915185528-22848: (4.5483067s)
functional_test.go:394: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915185528-22848 image save --daemon docker.io/library/busybox:save-functional-20210915185528-22848
functional_test.go:394: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915185528-22848 image save --daemon docker.io/library/busybox:save-functional-20210915185528-22848: (6.628662s)
functional_test.go:400: (dbg) Run:  docker images busybox
--- PASS: TestFunctional/parallel/SaveImage (20.52s)
TestFunctional/parallel/RemoveImage (26.17s)
=== RUN   TestFunctional/parallel/RemoveImage
=== PAUSE TestFunctional/parallel/RemoveImage
=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:333: (dbg) Run:  docker pull busybox:1.32
functional_test.go:333: (dbg) Done: docker pull busybox:1.32: (4.168879s)
functional_test.go:340: (dbg) Run:  docker tag busybox:1.32 docker.io/library/busybox:remove-functional-20210915185528-22848
functional_test.go:346: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915185528-22848 image load docker.io/library/busybox:remove-functional-20210915185528-22848
functional_test.go:346: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915185528-22848 image load docker.io/library/busybox:remove-functional-20210915185528-22848: (9.9863748s)
functional_test.go:352: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915185528-22848 image rm docker.io/library/busybox:remove-functional-20210915185528-22848
functional_test.go:352: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915185528-22848 image rm docker.io/library/busybox:remove-functional-20210915185528-22848: (4.2793068s)
functional_test.go:484: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p functional-20210915185528-22848 -- docker images
functional_test.go:484: (dbg) Done: out/minikube-windows-amd64.exe ssh -p functional-20210915185528-22848 -- docker images: (5.9746528s)
--- PASS: TestFunctional/parallel/RemoveImage (26.17s)
TestFunctional/parallel/SaveImageToFile (21.29s)
=== RUN   TestFunctional/parallel/SaveImageToFile
=== PAUSE TestFunctional/parallel/SaveImageToFile
=== CONT  TestFunctional/parallel/SaveImageToFile
functional_test.go:421: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915185528-22848 image pull docker.io/library/busybox:1.30
functional_test.go:421: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915185528-22848 image pull docker.io/library/busybox:1.30: (7.7424183s)
functional_test.go:429: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915185528-22848 image tag docker.io/library/busybox:1.30 docker.io/library/busybox:save-to-file-functional-20210915185528-22848
functional_test.go:429: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915185528-22848 image tag docker.io/library/busybox:1.30 docker.io/library/busybox:save-to-file-functional-20210915185528-22848: (5.1059907s)
functional_test.go:440: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915185528-22848 image save docker.io/library/busybox:save-to-file-functional-20210915185528-22848 C:\jenkins\workspace\Docker_Windows_integration\busybox-save.tar
functional_test.go:440: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915185528-22848 image save docker.io/library/busybox:save-to-file-functional-20210915185528-22848 C:\jenkins\workspace\Docker_Windows_integration\busybox-save.tar: (5.3870318s)
functional_test.go:446: (dbg) Run:  docker load -i C:\jenkins\workspace\Docker_Windows_integration\busybox-save.tar

=== CONT  TestFunctional/parallel/SaveImageToFile
functional_test.go:446: (dbg) Done: docker load -i C:\jenkins\workspace\Docker_Windows_integration\busybox-save.tar: (2.0926743s)
functional_test.go:453: (dbg) Run:  docker images busybox
--- PASS: TestFunctional/parallel/SaveImageToFile (21.29s)

TestFunctional/parallel/BuildImage (19.21s)

=== RUN   TestFunctional/parallel/BuildImage
=== PAUSE TestFunctional/parallel/BuildImage

=== CONT  TestFunctional/parallel/BuildImage
functional_test.go:504: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915185528-22848 image build -t localhost/my-image:functional-20210915185528-22848 testdata\build

=== CONT  TestFunctional/parallel/BuildImage
functional_test.go:504: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915185528-22848 image build -t localhost/my-image:functional-20210915185528-22848 testdata\build: (13.2435889s)
functional_test.go:509: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-20210915185528-22848 image build -t localhost/my-image:functional-20210915185528-22848 testdata\build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM busybox
latest: Pulling from library/busybox
24fb2886d6f6: Pulling fs layer
24fb2886d6f6: Verifying Checksum
24fb2886d6f6: Download complete
24fb2886d6f6: Pull complete
Digest: sha256:52f73a0a43a16cf37cd0720c90887ce972fe60ee06a687ee71fb93a7ca601df7
Status: Downloaded newer image for busybox:latest
---> 16ea53ea7c65
Step 2/3 : RUN true
---> Running in dc153b479256
Removing intermediate container dc153b479256
---> 75ab64b730bb
Step 3/3 : ADD content.txt /
---> f05999df7706
Successfully built f05999df7706
Successfully tagged localhost/my-image:functional-20210915185528-22848
functional_test.go:512: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-20210915185528-22848 image build -t localhost/my-image:functional-20210915185528-22848 testdata\build:
! Executing "docker container inspect functional-20210915185528-22848 --format={{.State.Status}}" took an unusually long time: 2.2338288s
* Restarting the docker service may improve performance.
functional_test.go:470: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p functional-20210915185528-22848 -- docker image inspect localhost/my-image:functional-20210915185528-22848

=== CONT  TestFunctional/parallel/BuildImage
functional_test.go:470: (dbg) Done: out/minikube-windows-amd64.exe ssh -p functional-20210915185528-22848 -- docker image inspect localhost/my-image:functional-20210915185528-22848: (5.9603872s)
--- PASS: TestFunctional/parallel/BuildImage (19.21s)
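Editor's note: the `image build` stdout above follows Docker's classic-builder format, where every Dockerfile instruction is echoed as `Step i/n : INSTRUCTION`. A minimal sketch that recovers the instruction list from exactly the step lines shown in this run (the parsing helper is illustrative, not part of minikube):

```python
import re

# Step lines copied verbatim from the docker build output above.
build_log = """\
Step 1/3 : FROM busybox
Step 2/3 : RUN true
Step 3/3 : ADD content.txt /
"""

# Classic-builder step lines have the shape "Step <i>/<n> : <INSTRUCTION>".
step_re = re.compile(r"^Step (\d+)/(\d+) : (.+)$")

steps = [m.group(3) for line in build_log.splitlines()
         if (m := step_re.match(line))]
print(steps)  # → ['FROM busybox', 'RUN true', 'ADD content.txt /']
```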

TestFunctional/parallel/ListImages (4.43s)

=== RUN   TestFunctional/parallel/ListImages
=== PAUSE TestFunctional/parallel/ListImages

=== CONT  TestFunctional/parallel/ListImages
functional_test.go:538: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915185528-22848 image ls

=== CONT  TestFunctional/parallel/ListImages
functional_test.go:538: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915185528-22848 image ls: (4.4321422s)
functional_test.go:543: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-20210915185528-22848 image ls:
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.5
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.22.1
k8s.gcr.io/kube-proxy:v1.22.1
k8s.gcr.io/kube-controller-manager:v1.22.1
k8s.gcr.io/kube-apiserver:v1.22.1
k8s.gcr.io/etcd:3.5.0-0
k8s.gcr.io/echoserver:1.8
k8s.gcr.io/coredns/coredns:v1.8.4
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/minikube-local-cache-test:functional-20210915185528-22848
docker.io/library/busybox:remove-functional-20210915185528-22848
docker.io/kubernetesui/metrics-scraper:v1.0.4
docker.io/kubernetesui/dashboard:v2.1.0
functional_test.go:546: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-20210915185528-22848 image ls:
! Executing "docker container inspect functional-20210915185528-22848 --format={{.State.Status}}" took an unusually long time: 2.2242395s
* Restarting the docker service may improve performance.
--- PASS: TestFunctional/parallel/ListImages (4.43s)

TestFunctional/parallel/NonActiveRuntimeDisabled (6.09s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1894: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915185528-22848 ssh "sudo systemctl is-active crio"
functional_test.go:1894: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20210915185528-22848 ssh "sudo systemctl is-active crio": exit status 1 (6.0895583s)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	! Executing "docker container inspect functional-20210915185528-22848 --format={{.State.Status}}" took an unusually long time: 2.6535439s
	* Restarting the docker service may improve performance.
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (6.09s)
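Editor's note: this test passes precisely because `systemctl is-active crio` fails: `is-active` exits 0 only for an active unit, and the `ssh: Process exited with status 3` line above is the non-zero status surfacing through minikube's ssh wrapper. A sketch of the invariant being relied on, using a hypothetical helper and the stdout/exit values captured in this run:

```python
# Hypothetical helper (not minikube's code) expressing the invariant this
# test relies on: a disabled runtime must report "inactive" on stdout AND
# return a non-zero exit status from `systemctl is-active`.
def runtime_is_disabled(stdout: str, exit_code: int) -> bool:
    return exit_code != 0 and stdout.strip() == "inactive"

# Values taken from the captured stdout and exit status above.
print(runtime_is_disabled("inactive\n", 3))  # → True
# An active runtime would report "active" with exit status 0.
print(runtime_is_disabled("active\n", 0))    # → False
```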

TestFunctional/parallel/ProfileCmd/profile_not_create (9.75s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1322: (dbg) Run:  out/minikube-windows-amd64.exe profile lis

=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1322: (dbg) Done: out/minikube-windows-amd64.exe profile lis: (2.9534122s)
functional_test.go:1326: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json

=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1326: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (6.7921011s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (9.75s)

TestFunctional/parallel/ProfileCmd/profile_list (7.04s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1360: (dbg) Run:  out/minikube-windows-amd64.exe profile list

=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1360: (dbg) Done: out/minikube-windows-amd64.exe profile list: (6.53925s)
functional_test.go:1365: Took "6.5396181s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1374: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1379: Took "503.8256ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (7.04s)

TestFunctional/parallel/ProfileCmd/profile_json_output (6.73s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1410: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json

=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1410: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (6.1191594s)
functional_test.go:1415: Took "6.1191594s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1423: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1428: Took "602.753ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (6.73s)
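Editor's note: the `Took "…"` lines in the profile tests quote durations in Go's `time.Duration` string format, which mixes units (`6.1191594s` vs `602.753ms`). A small sketch normalizing the two forms that appear in this report to float seconds (the helper is illustrative; full Go duration strings can also carry `m`, `h`, `µs`, etc., which are not handled here):

```python
def go_seconds(d: str) -> float:
    """Convert the seconds/milliseconds forms of a Go duration to seconds."""
    if d.endswith("ms"):
        return float(d[:-2]) / 1000.0
    if d.endswith("s"):
        return float(d[:-1])
    raise ValueError(f"unhandled duration: {d}")

# Values copied from the timings logged above.
print(go_seconds("6.1191594s"))  # → 6.1191594
print(go_seconds("602.753ms"))   # → 0.602753
```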

TestFunctional/parallel/Version/short (0.62s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2123: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915185528-22848 version --short
--- PASS: TestFunctional/parallel/Version/short (0.62s)

TestFunctional/parallel/Version/components (8.91s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2136: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915185528-22848 version -o=json --components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2136: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915185528-22848 version -o=json --components: (8.9102694s)
--- PASS: TestFunctional/parallel/Version/components (8.91s)

TestFunctional/parallel/DockerEnv/powershell (23.7s)

=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:601: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-20210915185528-22848 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-20210915185528-22848"

=== CONT  TestFunctional/parallel/DockerEnv/powershell
functional_test.go:601: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-20210915185528-22848 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-20210915185528-22848": (14.7447414s)
functional_test.go:622: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-20210915185528-22848 docker-env | Invoke-Expression ; docker images"

=== CONT  TestFunctional/parallel/DockerEnv/powershell
functional_test.go:622: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-20210915185528-22848 docker-env | Invoke-Expression ; docker images": (8.9452182s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (23.70s)

TestFunctional/parallel/UpdateContextCmd/no_changes (3.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:1985: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915185528-22848 update-context --alsologtostderr -v=2

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:1985: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915185528-22848 update-context --alsologtostderr -v=2: (3.2013551s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (3.20s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (3.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:1985: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915185528-22848 update-context --alsologtostderr -v=2
functional_test.go:1985: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915185528-22848 update-context --alsologtostderr -v=2: (3.164688s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (3.17s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (3.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:1985: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915185528-22848 update-context --alsologtostderr -v=2

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:1985: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915185528-22848 update-context --alsologtostderr -v=2: (3.1693731s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (3.17s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-20210915185528-22848 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (86.35s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-20210915185528-22848 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:147: (dbg) Done: kubectl --context functional-20210915185528-22848 apply -f testdata\testsvc.yaml: (1.1981775s)
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:343: "nginx-svc" [ceab4480-b717-4d9a-b575-ac0f416080c6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:343: "nginx-svc" [ceab4480-b717-4d9a-b575-ac0f416080c6] Running

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 1m25.0702401s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (86.35s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.38s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20210915185528-22848 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.38s)
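Editor's note: the jsonpath query above reads the tunnel-assigned address out of the Service's load-balancer status. The same lookup sketched over a plain dict (the sample object and its IP are illustrative, not the actual cluster object from this run):

```python
# Illustrative Service object; only the fields the jsonpath touches are shown.
svc = {
    "status": {
        "loadBalancer": {
            "ingress": [{"ip": "127.0.0.1"}],
        },
    },
}

# Equivalent of:
#   kubectl get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
ip = svc["status"]["loadBalancer"]["ingress"][0]["ip"]
print(ip)  # → 127.0.0.1
```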

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-20210915185528-22848 tunnel --alsologtostderr] ...

=== CONT  TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
helpers_test.go:507: unable to kill pid 47240: DuplicateHandle: The handle is invalid.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/delete_busybox_image (1.6s)

=== RUN   TestFunctional/delete_busybox_image
functional_test.go:186: (dbg) Run:  docker rmi -f docker.io/library/busybox:load-functional-20210915185528-22848
functional_test.go:191: (dbg) Run:  docker rmi -f docker.io/library/busybox:remove-functional-20210915185528-22848
--- PASS: TestFunctional/delete_busybox_image (1.60s)

TestFunctional/delete_my-image_image (0.73s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-20210915185528-22848
--- PASS: TestFunctional/delete_my-image_image (0.73s)

TestFunctional/delete_minikube_cached_images (0.67s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20210915185528-22848
--- PASS: TestFunctional/delete_minikube_cached_images (0.67s)

TestJSONOutput/start/Command (199.37s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-20210915190930-22848 --output=json --user=testUser --memory=2200 --wait=true --driver=docker
E0915 19:09:55.364454   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915183056-22848\client.crt: The system cannot find the path specified.
E0915 19:12:35.694325   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915185528-22848\client.crt: The system cannot find the path specified.
E0915 19:12:35.705833   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915185528-22848\client.crt: The system cannot find the path specified.
E0915 19:12:35.717304   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915185528-22848\client.crt: The system cannot find the path specified.
E0915 19:12:35.738394   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915185528-22848\client.crt: The system cannot find the path specified.
E0915 19:12:35.780585   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915185528-22848\client.crt: The system cannot find the path specified.
E0915 19:12:35.861389   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915185528-22848\client.crt: The system cannot find the path specified.
E0915 19:12:36.021735   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915185528-22848\client.crt: The system cannot find the path specified.
E0915 19:12:36.346021   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915185528-22848\client.crt: The system cannot find the path specified.
E0915 19:12:36.991030   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915185528-22848\client.crt: The system cannot find the path specified.
E0915 19:12:38.273545   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915185528-22848\client.crt: The system cannot find the path specified.
E0915 19:12:40.835705   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915185528-22848\client.crt: The system cannot find the path specified.
E0915 19:12:45.957594   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915185528-22848\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-20210915190930-22848 --output=json --user=testUser --memory=2200 --wait=true --driver=docker: (3m19.3659012s)
--- PASS: TestJSONOutput/start/Command (199.37s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
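Editor's note: with `--output=json`, minikube emits its progress as JSON event lines, and the `DistinctCurrentSteps`/`IncreasingCurrentSteps` subtests validate the step counters in those events. A sketch of both checks over made-up events (the event shape with a `data.currentstep` field follows minikube's JSON output, but the sample values below are invented, not taken from this run):

```python
import json

# Illustrative step events; values are made up for the sketch.
events = [
    '{"data": {"currentstep": "0", "name": "Initial Minikube Setup"}}',
    '{"data": {"currentstep": "1", "name": "Selecting Driver"}}',
    '{"data": {"currentstep": "3", "name": "Starting Node"}}',
]

steps = [int(json.loads(e)["data"]["currentstep"]) for e in events]

# The two subtests assert, respectively, that step numbers never repeat
# and that they only ever increase.
distinct = len(steps) == len(set(steps))
increasing = all(a < b for a, b in zip(steps, steps[1:]))
print(distinct, increasing)  # → True True
```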

TestJSONOutput/pause/Command (5.62s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-20210915190930-22848 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-20210915190930-22848 --output=json --user=testUser: (5.6199765s)
--- PASS: TestJSONOutput/pause/Command (5.62s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (5.28s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-20210915190930-22848 --output=json --user=testUser
E0915 19:12:56.198410   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915185528-22848\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-20210915190930-22848 --output=json --user=testUser: (5.2789398s)
--- PASS: TestJSONOutput/unpause/Command (5.28s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (19.09s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-20210915190930-22848 --output=json --user=testUser
E0915 19:13:16.680910   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915185528-22848\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-20210915190930-22848 --output=json --user=testUser: (19.0842735s)
--- PASS: TestJSONOutput/stop/Command (19.09s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (5.09s)

=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-20210915191333-22848 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-20210915191333-22848 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (436.7432ms)
-- stdout --
	{"specversion":"1.0","id":"4894c502-4e7c-4bee-8349-e597dcb90f25","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-20210915191333-22848] minikube v1.23.0 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"be308f8e-2342-4bbf-998c-14e0a7334963","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"e46c5804-4cce-4c1d-8c38-f5b85b636145","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"c20d66fa-0fb1-4357-9803-e53cd18ae702","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=12425"}}
	{"specversion":"1.0","id":"441e5ae8-c724-4f73-b2c8-a190122ad619","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-20210915191333-22848" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-20210915191333-22848
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-20210915191333-22848: (4.6543433s)
--- PASS: TestErrorJSONOutput (5.09s)

TestKicCustomNetwork/create_custom_network (210.51s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-20210915191338-22848 --network=
E0915 19:13:57.643503   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915185528-22848\client.crt: The system cannot find the path specified.
E0915 19:15:19.566788   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915185528-22848\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-20210915191338-22848 --network=: (3m13.0994752s)
kic_custom_network_test.go:102: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-20210915191338-22848" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-20210915191338-22848
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-20210915191338-22848: (16.7225032s)
--- PASS: TestKicCustomNetwork/create_custom_network (210.51s)

TestKicCustomNetwork/use_default_bridge_network (199.63s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-20210915191709-22848 --network=bridge
E0915 19:17:35.690609   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915185528-22848\client.crt: The system cannot find the path specified.
E0915 19:18:03.407558   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915185528-22848\client.crt: The system cannot find the path specified.
E0915 19:18:32.280481   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915183056-22848\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-20210915191709-22848 --network=bridge: (3m5.2082945s)
kic_custom_network_test.go:102: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-20210915191709-22848" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-20210915191709-22848
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-20210915191709-22848: (13.7384633s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (199.63s)

TestKicExistingNetwork (212.07s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:102: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:94: (dbg) Run:  out/minikube-windows-amd64.exe start -p existing-network-20210915192031-22848 --network=existing-network
E0915 19:22:35.692329   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915185528-22848\client.crt: The system cannot find the path specified.
E0915 19:23:32.278078   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915183056-22848\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:94: (dbg) Done: out/minikube-windows-amd64.exe start -p existing-network-20210915192031-22848 --network=existing-network: (3m12.6372401s)
helpers_test.go:176: Cleaning up "existing-network-20210915192031-22848" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p existing-network-20210915192031-22848
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p existing-network-20210915192031-22848: (15.7457262s)
--- PASS: TestKicExistingNetwork (212.07s)

TestMainNoArgs (0.46s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.46s)

TestMultiNode/serial/FreshStart2Nodes (375.32s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:82: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20210915192401-22848 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker
E0915 19:26:35.363567   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915183056-22848\client.crt: The system cannot find the path specified.
E0915 19:27:35.688929   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915185528-22848\client.crt: The system cannot find the path specified.
E0915 19:28:32.275620   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915183056-22848\client.crt: The system cannot find the path specified.
E0915 19:28:58.767105   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915185528-22848\client.crt: The system cannot find the path specified.
multinode_test.go:82: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-20210915192401-22848 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker: (6m7.9278065s)
multinode_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210915192401-22848 status --alsologtostderr
multinode_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20210915192401-22848 status --alsologtostderr: (7.3913819s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (375.32s)

TestMultiNode/serial/DeployApp2Nodes (33.39s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:463: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20210915192401-22848 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:463: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20210915192401-22848 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: (2.8758321s)
multinode_test.go:468: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20210915192401-22848 -- rollout status deployment/busybox
multinode_test.go:468: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20210915192401-22848 -- rollout status deployment/busybox: (6.3794549s)
multinode_test.go:474: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20210915192401-22848 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:474: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20210915192401-22848 -- get pods -o jsonpath='{.items[*].status.podIP}': (2.097844s)
multinode_test.go:486: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20210915192401-22848 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:486: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20210915192401-22848 -- get pods -o jsonpath='{.items[*].metadata.name}': (2.0710237s)
multinode_test.go:494: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20210915192401-22848 -- exec busybox-84b6686758-8v6x7 -- nslookup kubernetes.io
multinode_test.go:494: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20210915192401-22848 -- exec busybox-84b6686758-8v6x7 -- nslookup kubernetes.io: (5.6354542s)
multinode_test.go:494: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20210915192401-22848 -- exec busybox-84b6686758-dqwxc -- nslookup kubernetes.io
multinode_test.go:494: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20210915192401-22848 -- exec busybox-84b6686758-dqwxc -- nslookup kubernetes.io: (4.0839561s)
multinode_test.go:504: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20210915192401-22848 -- exec busybox-84b6686758-8v6x7 -- nslookup kubernetes.default
multinode_test.go:504: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20210915192401-22848 -- exec busybox-84b6686758-8v6x7 -- nslookup kubernetes.default: (2.6991707s)
multinode_test.go:504: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20210915192401-22848 -- exec busybox-84b6686758-dqwxc -- nslookup kubernetes.default
multinode_test.go:504: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20210915192401-22848 -- exec busybox-84b6686758-dqwxc -- nslookup kubernetes.default: (2.5294243s)
multinode_test.go:512: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20210915192401-22848 -- exec busybox-84b6686758-8v6x7 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:512: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20210915192401-22848 -- exec busybox-84b6686758-8v6x7 -- nslookup kubernetes.default.svc.cluster.local: (2.5326854s)
multinode_test.go:512: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20210915192401-22848 -- exec busybox-84b6686758-dqwxc -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:512: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20210915192401-22848 -- exec busybox-84b6686758-dqwxc -- nslookup kubernetes.default.svc.cluster.local: (2.4778975s)
--- PASS: TestMultiNode/serial/DeployApp2Nodes (33.39s)

TestMultiNode/serial/PingHostFrom2Pods (11.95s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:522: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20210915192401-22848 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:522: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20210915192401-22848 -- get pods -o jsonpath='{.items[*].metadata.name}': (2.1003506s)
multinode_test.go:530: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20210915192401-22848 -- exec busybox-84b6686758-8v6x7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:530: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20210915192401-22848 -- exec busybox-84b6686758-8v6x7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3": (2.4783627s)
multinode_test.go:538: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20210915192401-22848 -- exec busybox-84b6686758-8v6x7 -- sh -c "ping -c 1 192.168.65.2"
multinode_test.go:538: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20210915192401-22848 -- exec busybox-84b6686758-8v6x7 -- sh -c "ping -c 1 192.168.65.2": (2.4597004s)
multinode_test.go:530: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20210915192401-22848 -- exec busybox-84b6686758-dqwxc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:530: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20210915192401-22848 -- exec busybox-84b6686758-dqwxc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3": (2.4635748s)
multinode_test.go:538: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20210915192401-22848 -- exec busybox-84b6686758-dqwxc -- sh -c "ping -c 1 192.168.65.2"
multinode_test.go:538: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20210915192401-22848 -- exec busybox-84b6686758-dqwxc -- sh -c "ping -c 1 192.168.65.2": (2.4456413s)
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (11.95s)

TestMultiNode/serial/AddNode (158.87s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-20210915192401-22848 -v 3 --alsologtostderr
E0915 19:32:35.685913   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915185528-22848\client.crt: The system cannot find the path specified.
multinode_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-20210915192401-22848 -v 3 --alsologtostderr: (2m28.9893126s)
multinode_test.go:113: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210915192401-22848 status --alsologtostderr
E0915 19:33:32.274333   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915183056-22848\client.crt: The system cannot find the path specified.
multinode_test.go:113: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20210915192401-22848 status --alsologtostderr: (9.8797459s)
--- PASS: TestMultiNode/serial/AddNode (158.87s)

TestMultiNode/serial/ProfileList (4.88s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:129: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:129: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (4.8806338s)
--- PASS: TestMultiNode/serial/ProfileList (4.88s)

TestMultiNode/serial/CopyFile (36.97s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:170: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210915192401-22848 status --output json --alsologtostderr
multinode_test.go:170: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20210915192401-22848 status --output json --alsologtostderr: (9.7513654s)
helpers_test.go:535: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210915192401-22848 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:535: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20210915192401-22848 cp testdata\cp-test.txt /home/docker/cp-test.txt: (4.0383818s)
helpers_test.go:549: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210915192401-22848 ssh "sudo cat /home/docker/cp-test.txt"
helpers_test.go:549: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20210915192401-22848 ssh "sudo cat /home/docker/cp-test.txt": (4.6179219s)
helpers_test.go:535: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210915192401-22848 cp testdata\cp-test.txt multinode-20210915192401-22848-m02:/home/docker/cp-test.txt
helpers_test.go:535: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20210915192401-22848 cp testdata\cp-test.txt multinode-20210915192401-22848-m02:/home/docker/cp-test.txt: (4.5638742s)
helpers_test.go:549: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210915192401-22848 ssh -n multinode-20210915192401-22848-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:549: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20210915192401-22848 ssh -n multinode-20210915192401-22848-m02 "sudo cat /home/docker/cp-test.txt": (4.6326308s)
helpers_test.go:535: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210915192401-22848 cp testdata\cp-test.txt multinode-20210915192401-22848-m03:/home/docker/cp-test.txt
helpers_test.go:535: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20210915192401-22848 cp testdata\cp-test.txt multinode-20210915192401-22848-m03:/home/docker/cp-test.txt: (4.74955s)
helpers_test.go:549: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210915192401-22848 ssh -n multinode-20210915192401-22848-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:549: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20210915192401-22848 ssh -n multinode-20210915192401-22848-m03 "sudo cat /home/docker/cp-test.txt": (4.6106869s)
--- PASS: TestMultiNode/serial/CopyFile (36.97s)

TestMultiNode/serial/StopNode (23.22s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:192: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210915192401-22848 node stop m03
multinode_test.go:192: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20210915192401-22848 node stop m03: (7.0059515s)
multinode_test.go:198: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210915192401-22848 status
multinode_test.go:198: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20210915192401-22848 status: exit status 7 (8.0036224s)
-- stdout --
	multinode-20210915192401-22848
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20210915192401-22848-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20210915192401-22848-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210915192401-22848 status --alsologtostderr
multinode_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20210915192401-22848 status --alsologtostderr: exit status 7 (8.2067902s)
-- stdout --
	multinode-20210915192401-22848
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20210915192401-22848-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20210915192401-22848-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0915 19:34:38.323112   47300 out.go:298] Setting OutFile to fd 2576 ...
	I0915 19:34:38.324795   47300 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0915 19:34:38.324795   47300 out.go:311] Setting ErrFile to fd 2580...
	I0915 19:34:38.324795   47300 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0915 19:34:38.343305   47300 out.go:305] Setting JSON to false
	I0915 19:34:38.343305   47300 mustload.go:65] Loading cluster: multinode-20210915192401-22848
	I0915 19:34:38.344885   47300 config.go:177] Loaded profile config "multinode-20210915192401-22848": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.1
	I0915 19:34:38.344885   47300 status.go:253] checking status of multinode-20210915192401-22848 ...
	I0915 19:34:38.369106   47300 cli_runner.go:115] Run: docker container inspect multinode-20210915192401-22848 --format={{.State.Status}}
	I0915 19:34:40.341068   47300 cli_runner.go:168] Completed: docker container inspect multinode-20210915192401-22848 --format={{.State.Status}}: (1.9719748s)
	I0915 19:34:40.341416   47300 status.go:328] multinode-20210915192401-22848 host status = "Running" (err=<nil>)
	I0915 19:34:40.341867   47300 host.go:66] Checking if "multinode-20210915192401-22848" exists ...
	I0915 19:34:40.359573   47300 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210915192401-22848
	I0915 19:34:40.976918   47300 host.go:66] Checking if "multinode-20210915192401-22848" exists ...
	I0915 19:34:40.999755   47300 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 19:34:41.009456   47300 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210915192401-22848
	I0915 19:34:41.664214   47300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56242 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\multinode-20210915192401-22848\id_rsa Username:docker}
	I0915 19:34:41.916119   47300 ssh_runner.go:152] Run: systemctl --version
	I0915 19:34:41.965064   47300 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
	I0915 19:34:42.037742   47300 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20210915192401-22848
	I0915 19:34:42.675861   47300 kubeconfig.go:93] found "multinode-20210915192401-22848" server: "https://127.0.0.1:56241"
	I0915 19:34:42.675861   47300 api_server.go:164] Checking apiserver status ...
	I0915 19:34:42.693169   47300 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 19:34:42.794696   47300 ssh_runner.go:152] Run: sudo egrep ^[0-9]+:freezer: /proc/2075/cgroup
	I0915 19:34:42.850082   47300 api_server.go:180] apiserver freezer: "7:freezer:/docker/20740819262c9930477aafd8695e268aad8f11d2dc9dbb279b4585bf77285640/kubepods/burstable/podebda77179c53c7e043a4d62f5fb2ff4b/ba1b102b2f2434f22652c31b4f3011172f0ab11f62ee8caba8298552eb4f5c47"
	I0915 19:34:42.873646   47300 ssh_runner.go:152] Run: sudo cat /sys/fs/cgroup/freezer/docker/20740819262c9930477aafd8695e268aad8f11d2dc9dbb279b4585bf77285640/kubepods/burstable/podebda77179c53c7e043a4d62f5fb2ff4b/ba1b102b2f2434f22652c31b4f3011172f0ab11f62ee8caba8298552eb4f5c47/freezer.state
	I0915 19:34:42.929890   47300 api_server.go:202] freezer state: "THAWED"
	I0915 19:34:42.929890   47300 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:56241/healthz ...
	I0915 19:34:42.966062   47300 api_server.go:265] https://127.0.0.1:56241/healthz returned 200:
	ok
	I0915 19:34:42.966276   47300 status.go:419] multinode-20210915192401-22848 apiserver status = Running (err=<nil>)
	I0915 19:34:42.966276   47300 status.go:255] multinode-20210915192401-22848 status: &{Name:multinode-20210915192401-22848 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 19:34:42.966417   47300 status.go:253] checking status of multinode-20210915192401-22848-m02 ...
	I0915 19:34:42.999038   47300 cli_runner.go:115] Run: docker container inspect multinode-20210915192401-22848-m02 --format={{.State.Status}}
	I0915 19:34:43.659931   47300 status.go:328] multinode-20210915192401-22848-m02 host status = "Running" (err=<nil>)
	I0915 19:34:43.659931   47300 host.go:66] Checking if "multinode-20210915192401-22848-m02" exists ...
	I0915 19:34:43.674208   47300 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210915192401-22848-m02
	I0915 19:34:44.364666   47300 host.go:66] Checking if "multinode-20210915192401-22848-m02" exists ...
	I0915 19:34:44.385812   47300 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 19:34:44.399554   47300 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210915192401-22848-m02
	I0915 19:34:45.064839   47300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56286 SSHKeyPath:C:\Users\jenkins\minikube-integration\.minikube\machines\multinode-20210915192401-22848-m02\id_rsa Username:docker}
	I0915 19:34:45.348114   47300 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
	I0915 19:34:45.412469   47300 status.go:255] multinode-20210915192401-22848-m02 status: &{Name:multinode-20210915192401-22848-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0915 19:34:45.412806   47300 status.go:253] checking status of multinode-20210915192401-22848-m03 ...
	I0915 19:34:45.439097   47300 cli_runner.go:115] Run: docker container inspect multinode-20210915192401-22848-m03 --format={{.State.Status}}
	I0915 19:34:46.083425   47300 status.go:328] multinode-20210915192401-22848-m03 host status = "Stopped" (err=<nil>)
	I0915 19:34:46.084025   47300 status.go:341] host is not running, skipping remaining checks
	I0915 19:34:46.084025   47300 status.go:255] multinode-20210915192401-22848-m03 status: &{Name:multinode-20210915192401-22848-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (23.22s)

TestMultiNode/serial/StartAfterStop (121.1s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:226: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:236: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210915192401-22848 node start m03 --alsologtostderr
multinode_test.go:236: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20210915192401-22848 node start m03 --alsologtostderr: (1m50.2605686s)
multinode_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210915192401-22848 status
multinode_test.go:243: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20210915192401-22848 status: (9.8682399s)
multinode_test.go:257: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (121.10s)

TestMultiNode/serial/RestartKeepsNodes (270.57s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:265: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-20210915192401-22848
multinode_test.go:272: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-20210915192401-22848
multinode_test.go:272: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-20210915192401-22848: (38.0981251s)
multinode_test.go:277: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20210915192401-22848 --wait=true -v=8 --alsologtostderr
E0915 19:37:35.684626   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915185528-22848\client.crt: The system cannot find the path specified.
E0915 19:38:32.272382   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915183056-22848\client.crt: The system cannot find the path specified.
multinode_test.go:277: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-20210915192401-22848 --wait=true -v=8 --alsologtostderr: (3m51.4282367s)
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-20210915192401-22848
--- PASS: TestMultiNode/serial/RestartKeepsNodes (270.57s)

TestMultiNode/serial/DeleteNode (33.89s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:376: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210915192401-22848 node delete m03
multinode_test.go:376: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20210915192401-22848 node delete m03: (24.9999088s)
multinode_test.go:382: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210915192401-22848 status --alsologtostderr
multinode_test.go:382: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20210915192401-22848 status --alsologtostderr: (7.5135168s)
multinode_test.go:396: (dbg) Run:  docker volume ls
multinode_test.go:406: (dbg) Run:  kubectl get nodes
multinode_test.go:414: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (33.89s)

TestMultiNode/serial/StopMultiNode (39.38s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210915192401-22848 stop
multinode_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20210915192401-22848 stop: (33.3326455s)
multinode_test.go:302: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210915192401-22848 status
multinode_test.go:302: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20210915192401-22848 status: exit status 7 (3.0331124s)

-- stdout --
	multinode-20210915192401-22848
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20210915192401-22848-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210915192401-22848 status --alsologtostderr
multinode_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20210915192401-22848 status --alsologtostderr: exit status 7 (3.0081009s)

-- stdout --
	multinode-20210915192401-22848
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20210915192401-22848-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0915 19:42:28.494485   49936 out.go:298] Setting OutFile to fd 2376 ...
	I0915 19:42:28.496491   49936 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0915 19:42:28.496491   49936 out.go:311] Setting ErrFile to fd 2648...
	I0915 19:42:28.496491   49936 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0915 19:42:28.514488   49936 out.go:305] Setting JSON to false
	I0915 19:42:28.514488   49936 mustload.go:65] Loading cluster: multinode-20210915192401-22848
	I0915 19:42:28.515489   49936 config.go:177] Loaded profile config "multinode-20210915192401-22848": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.1
	I0915 19:42:28.515489   49936 status.go:253] checking status of multinode-20210915192401-22848 ...
	I0915 19:42:28.537004   49936 cli_runner.go:115] Run: docker container inspect multinode-20210915192401-22848 --format={{.State.Status}}
	I0915 19:42:30.446563   49936 cli_runner.go:168] Completed: docker container inspect multinode-20210915192401-22848 --format={{.State.Status}}: (1.9090544s)
	I0915 19:42:30.446735   49936 status.go:328] multinode-20210915192401-22848 host status = "Stopped" (err=<nil>)
	I0915 19:42:30.446735   49936 status.go:341] host is not running, skipping remaining checks
	I0915 19:42:30.446735   49936 status.go:255] multinode-20210915192401-22848 status: &{Name:multinode-20210915192401-22848 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 19:42:30.446735   49936 status.go:253] checking status of multinode-20210915192401-22848-m02 ...
	I0915 19:42:30.470430   49936 cli_runner.go:115] Run: docker container inspect multinode-20210915192401-22848-m02 --format={{.State.Status}}
	I0915 19:42:31.049104   49936 status.go:328] multinode-20210915192401-22848-m02 host status = "Stopped" (err=<nil>)
	I0915 19:42:31.049233   49936 status.go:341] host is not running, skipping remaining checks
	I0915 19:42:31.049233   49936 status.go:255] multinode-20210915192401-22848-m02 status: &{Name:multinode-20210915192401-22848-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (39.38s)

TestMultiNode/serial/RestartMultiNode (163.21s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:326: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:336: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20210915192401-22848 --wait=true -v=8 --alsologtostderr --driver=docker
E0915 19:42:35.681253   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915185528-22848\client.crt: The system cannot find the path specified.
E0915 19:43:15.360344   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915183056-22848\client.crt: The system cannot find the path specified.
E0915 19:43:32.270760   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915183056-22848\client.crt: The system cannot find the path specified.
multinode_test.go:336: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-20210915192401-22848 --wait=true -v=8 --alsologtostderr --driver=docker: (2m34.285793s)
multinode_test.go:342: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210915192401-22848 status --alsologtostderr
multinode_test.go:342: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20210915192401-22848 status --alsologtostderr: (7.610247s)
multinode_test.go:356: (dbg) Run:  kubectl get nodes
multinode_test.go:364: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (163.21s)

TestMultiNode/serial/ValidateNameConflict (255.04s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:425: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-20210915192401-22848
multinode_test.go:434: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20210915192401-22848-m02 --driver=docker
multinode_test.go:434: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-20210915192401-22848-m02 --driver=docker: exit status 14 (481.4211ms)

-- stdout --
	* [multinode-20210915192401-22848-m02] minikube v1.23.0 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	  - KUBECONFIG=C:\Users\jenkins\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12425
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-20210915192401-22848-m02' is duplicated with machine name 'multinode-20210915192401-22848-m02' in profile 'multinode-20210915192401-22848'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:442: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20210915192401-22848-m03 --driver=docker
E0915 19:45:38.761894   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915185528-22848\client.crt: The system cannot find the path specified.
E0915 19:47:35.683743   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915185528-22848\client.crt: The system cannot find the path specified.
E0915 19:48:32.268845   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915183056-22848\client.crt: The system cannot find the path specified.
multinode_test.go:442: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-20210915192401-22848-m03 --driver=docker: (3m47.2652937s)
multinode_test.go:449: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-20210915192401-22848
multinode_test.go:449: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node add -p multinode-20210915192401-22848: exit status 80 (6.7491386s)

-- stdout --
	* Adding node m03 to cluster multinode-20210915192401-22848
	
	

-- /stdout --
** stderr ** 
	! Executing "docker container inspect multinode-20210915192401-22848 --format={{.State.Status}}" took an unusually long time: 2.1540042s
	* Restarting the docker service may improve performance.
	X Exiting due to GUEST_NODE_ADD: Node multinode-20210915192401-22848-m03 already exists in multinode-20210915192401-22848-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                             │
	│    * If the above advice does not help, please let us know:                                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                               │
	│                                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                    │
	│    * Please also attach the following file to the GitHub issue:                                             │
	│    * - C:\Users\jenkins\AppData\Local\Temp\minikube_node_68dc163ecc1470275f97c1774d2d827d0925d552_52.log    │
	│                                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:454: (dbg) Run:  out/minikube-windows-amd64.exe delete -p multinode-20210915192401-22848-m03
multinode_test.go:454: (dbg) Done: out/minikube-windows-amd64.exe delete -p multinode-20210915192401-22848-m03: (20.0756609s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (255.04s)

TestDebPackageInstall/install_amd64_debian_sid/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_debian_sid/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian_sid/minikube (0.00s)

TestDebPackageInstall/install_amd64_debian_sid/kvm2-driver (0s)

=== RUN   TestDebPackageInstall/install_amd64_debian_sid/kvm2-driver
--- PASS: TestDebPackageInstall/install_amd64_debian_sid/kvm2-driver (0.00s)

TestDebPackageInstall/install_amd64_debian_latest/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_debian_latest/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian_latest/minikube (0.00s)

TestDebPackageInstall/install_amd64_debian_latest/kvm2-driver (0s)

=== RUN   TestDebPackageInstall/install_amd64_debian_latest/kvm2-driver
--- PASS: TestDebPackageInstall/install_amd64_debian_latest/kvm2-driver (0.00s)

TestDebPackageInstall/install_amd64_debian_10/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_debian_10/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian_10/minikube (0.00s)

TestDebPackageInstall/install_amd64_debian_10/kvm2-driver (0s)

=== RUN   TestDebPackageInstall/install_amd64_debian_10/kvm2-driver
--- PASS: TestDebPackageInstall/install_amd64_debian_10/kvm2-driver (0.00s)

TestDebPackageInstall/install_amd64_debian_9/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_debian_9/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian_9/minikube (0.00s)

TestDebPackageInstall/install_amd64_debian_9/kvm2-driver (0s)

=== RUN   TestDebPackageInstall/install_amd64_debian_9/kvm2-driver
--- PASS: TestDebPackageInstall/install_amd64_debian_9/kvm2-driver (0.00s)

TestDebPackageInstall/install_amd64_ubuntu_latest/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu_latest/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu_latest/minikube (0.00s)

TestDebPackageInstall/install_amd64_ubuntu_latest/kvm2-driver (0s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu_latest/kvm2-driver
--- PASS: TestDebPackageInstall/install_amd64_ubuntu_latest/kvm2-driver (0.00s)

TestDebPackageInstall/install_amd64_ubuntu_20.10/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu_20.10/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu_20.10/minikube (0.00s)

TestDebPackageInstall/install_amd64_ubuntu_20.10/kvm2-driver (0s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu_20.10/kvm2-driver
--- PASS: TestDebPackageInstall/install_amd64_ubuntu_20.10/kvm2-driver (0.00s)

TestDebPackageInstall/install_amd64_ubuntu_20.04/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu_20.04/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu_20.04/minikube (0.00s)

TestDebPackageInstall/install_amd64_ubuntu_20.04/kvm2-driver (0s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu_20.04/kvm2-driver
--- PASS: TestDebPackageInstall/install_amd64_ubuntu_20.04/kvm2-driver (0.00s)

TestDebPackageInstall/install_amd64_ubuntu_18.04/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu_18.04/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu_18.04/minikube (0.00s)

TestDebPackageInstall/install_amd64_ubuntu_18.04/kvm2-driver (0s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu_18.04/kvm2-driver
--- PASS: TestDebPackageInstall/install_amd64_ubuntu_18.04/kvm2-driver (0.00s)

TestPreload (421.2s)

=== RUN   TestPreload
preload_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-20210915194958-22848 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.17.0
E0915 19:52:35.685694   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915185528-22848\client.crt: The system cannot find the path specified.
E0915 19:53:32.269184   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915183056-22848\client.crt: The system cannot find the path specified.
preload_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-20210915194958-22848 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.17.0: (3m52.3864651s)
preload_test.go:62: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p test-preload-20210915194958-22848 -- docker pull busybox
preload_test.go:62: (dbg) Done: out/minikube-windows-amd64.exe ssh -p test-preload-20210915194958-22848 -- docker pull busybox: (7.3136674s)
preload_test.go:72: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-20210915194958-22848 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker --kubernetes-version=v1.17.3
preload_test.go:72: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-20210915194958-22848 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker --kubernetes-version=v1.17.3: (2m39.353714s)
preload_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p test-preload-20210915194958-22848 -- docker images
preload_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe ssh -p test-preload-20210915194958-22848 -- docker images: (4.937768s)
helpers_test.go:176: Cleaning up "test-preload-20210915194958-22848" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-20210915194958-22848
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-20210915194958-22848: (17.2057439s)
--- PASS: TestPreload (421.20s)

TestSkaffold (304.55s)

=== RUN   TestSkaffold
skaffold_test.go:58: (dbg) Run:  C:\Users\jenkins\AppData\Local\Temp\skaffold.exe472570295 version
skaffold_test.go:62: skaffold version: v1.31.0
skaffold_test.go:65: (dbg) Run:  out/minikube-windows-amd64.exe start -p skaffold-20210915200110-22848 --memory=2600 --driver=docker
E0915 20:02:18.758610   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915185528-22848\client.crt: The system cannot find the path specified.
E0915 20:02:35.674125   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915185528-22848\client.crt: The system cannot find the path specified.
E0915 20:03:32.263273   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915183056-22848\client.crt: The system cannot find the path specified.
skaffold_test.go:65: (dbg) Done: out/minikube-windows-amd64.exe start -p skaffold-20210915200110-22848 --memory=2600 --driver=docker: (3m9.5829982s)
skaffold_test.go:85: copying out/minikube-windows-amd64.exe to C:\jenkins\workspace\Docker_Windows_integration\out\minikube.exe
skaffold_test.go:109: (dbg) Run:  C:\Users\jenkins\AppData\Local\Temp\skaffold.exe472570295 run --minikube-profile skaffold-20210915200110-22848 --kube-context skaffold-20210915200110-22848 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:109: (dbg) Done: C:\Users\jenkins\AppData\Local\Temp\skaffold.exe472570295 run --minikube-profile skaffold-20210915200110-22848 --kube-context skaffold-20210915200110-22848 --status-check=true --port-forward=false --interactive=false: (1m25.2069602s)
skaffold_test.go:115: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:343: "leeroy-app-79c8bc96f6-m6xlg" [7b06365b-4054-446b-9d16-79c51fb9f582] Running
skaffold_test.go:115: (dbg) TestSkaffold: app=leeroy-app healthy within 5.0626765s
skaffold_test.go:118: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:343: "leeroy-web-58445c9944-5bjvj" [15fb189b-b124-4961-8266-e0d3b1b6283f] Running
skaffold_test.go:118: (dbg) TestSkaffold: app=leeroy-web healthy within 5.0357476s
helpers_test.go:176: Cleaning up "skaffold-20210915200110-22848" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p skaffold-20210915200110-22848
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p skaffold-20210915200110-22848: (17.9397553s)
--- PASS: TestSkaffold (304.55s)

TestInsufficientStorage (53.14s)

=== RUN   TestInsufficientStorage
status_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p insufficient-storage-20210915200614-22848 --memory=2048 --output=json --wait=true --driver=docker
status_test.go:51: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p insufficient-storage-20210915200614-22848 --memory=2048 --output=json --wait=true --driver=docker: exit status 26 (32.3269087s)

-- stdout --
	{"specversion":"1.0","id":"4c56576a-508a-4c36-9d3e-160d5897d896","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-20210915200614-22848] minikube v1.23.0 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3ec68625-5141-4349-8952-11bb17e8d6e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"693b7b09-fa1f-42c6-b4eb-f0525f3c7869","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"b3ccaa61-2c60-413e-a51f-e75aa3290580","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=12425"}}
	{"specversion":"1.0","id":"54a2393f-bd55-48f7-953b-2d8afb35421e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"f8e1c1cd-4647-4c51-a22f-ee7a3ac0054d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"ffbb450a-6827-46f9-97f4-3b8388a21596","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20210915200614-22848 in cluster insufficient-storage-20210915200614-22848","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"00543775-f831-4319-9b0c-e353930aaf07","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"4db58b4d-a866-4a1a-97f4-4759b91a79bc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"49dbb907-2aa2-465c-82ad-44526c756ddc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity)","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:77: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-20210915200614-22848 --output=json --layout=cluster
status_test.go:77: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-20210915200614-22848 --output=json --layout=cluster: exit status 7 (4.5418821s)

-- stdout --
	{"Name":"insufficient-storage-20210915200614-22848","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.23.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20210915200614-22848","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0915 20:06:51.678694   63040 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20210915200614-22848" does not appear in C:\Users\jenkins\minikube-integration\kubeconfig

** /stderr **
status_test.go:77: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-20210915200614-22848 --output=json --layout=cluster
status_test.go:77: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-20210915200614-22848 --output=json --layout=cluster: exit status 7 (4.4910033s)

-- stdout --
	{"Name":"insufficient-storage-20210915200614-22848","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.23.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20210915200614-22848","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0915 20:06:56.150401    9932 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20210915200614-22848" does not appear in C:\Users\jenkins\minikube-integration\kubeconfig
	E0915 20:06:56.225222    9932 status.go:557] unable to read event log: stat: CreateFile C:\Users\jenkins\minikube-integration\.minikube\profiles\insufficient-storage-20210915200614-22848\events.json: The system cannot find the file specified.

** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-20210915200614-22848" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p insufficient-storage-20210915200614-22848
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p insufficient-storage-20210915200614-22848: (11.7685363s)
--- PASS: TestInsufficientStorage (53.14s)
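The `--layout=cluster` status output above is machine-readable JSON. As a minimal sketch (not minikube's own code), a caller could detect the insufficient-storage condition by checking for status code 507 on the cluster or any node; the JSON below is abbreviated from the run above to the fields used:

```python
import json

# Status JSON as emitted by `minikube status --output=json --layout=cluster`,
# abbreviated from the run above to the fields this check uses.
status = json.loads("""
{"Name":"insufficient-storage-20210915200614-22848",
 "StatusCode":507,"StatusName":"InsufficientStorage",
 "StatusDetail":"/var is almost out of disk space",
 "Nodes":[{"Name":"insufficient-storage-20210915200614-22848",
           "StatusCode":507,"StatusName":"InsufficientStorage"}]}
""")

def is_storage_starved(cluster: dict) -> bool:
    """507 mirrors HTTP 'Insufficient Storage'; check the cluster and its nodes."""
    codes = [cluster["StatusCode"]] + [n["StatusCode"] for n in cluster.get("Nodes", [])]
    return any(c == 507 for c in codes)

print(is_storage_starved(status))  # True for the run above
```

Note that the command also exits non-zero (exit status 7 here), so scripted callers should inspect the JSON rather than rely on exit code alone.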

TestRunningBinaryUpgrade (981.14s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Run:  C:\Users\jenkins\AppData\Local\Temp\minikube-v1.9.0.3807417118.exe start -p running-upgrade-20210915200708-22848 --memory=2200 --vm-driver=docker
E0915 20:07:35.673037   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915185528-22848\client.crt: The system cannot find the path specified.
E0915 20:08:32.260736   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915183056-22848\client.crt: The system cannot find the path specified.
E0915 20:10:46.767425   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210915200110-22848\client.crt: The system cannot find the path specified.
E0915 20:10:46.773785   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210915200110-22848\client.crt: The system cannot find the path specified.
E0915 20:10:46.785569   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210915200110-22848\client.crt: The system cannot find the path specified.
E0915 20:10:46.806360   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210915200110-22848\client.crt: The system cannot find the path specified.
E0915 20:10:46.846901   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210915200110-22848\client.crt: The system cannot find the path specified.
E0915 20:10:46.928008   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210915200110-22848\client.crt: The system cannot find the path specified.
E0915 20:10:47.088862   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210915200110-22848\client.crt: The system cannot find the path specified.
E0915 20:10:47.410054   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210915200110-22848\client.crt: The system cannot find the path specified.
E0915 20:10:48.051761   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210915200110-22848\client.crt: The system cannot find the path specified.
E0915 20:10:49.333489   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210915200110-22848\client.crt: The system cannot find the path specified.
E0915 20:10:51.895063   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210915200110-22848\client.crt: The system cannot find the path specified.
E0915 20:10:57.017423   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210915200110-22848\client.crt: The system cannot find the path specified.
E0915 20:11:07.259975   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210915200110-22848\client.crt: The system cannot find the path specified.
E0915 20:11:27.743096   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210915200110-22848\client.crt: The system cannot find the path specified.
E0915 20:12:08.705220   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210915200110-22848\client.crt: The system cannot find the path specified.
E0915 20:12:35.671258   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915185528-22848\client.crt: The system cannot find the path specified.
E0915 20:13:30.630463   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210915200110-22848\client.crt: The system cannot find the path specified.
E0915 20:13:32.264489   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915183056-22848\client.crt: The system cannot find the path specified.
E0915 20:15:46.764465   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210915200110-22848\client.crt: The system cannot find the path specified.

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Done: C:\Users\jenkins\AppData\Local\Temp\minikube-v1.9.0.3807417118.exe start -p running-upgrade-20210915200708-22848 --memory=2200 --vm-driver=docker: (12m4.9083466s)
version_upgrade_test.go:138: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-20210915200708-22848 --memory=2200 --alsologtostderr -v=1 --driver=docker

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:138: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-20210915200708-22848 --memory=2200 --alsologtostderr -v=1 --driver=docker: (3m47.2464952s)
helpers_test.go:176: Cleaning up "running-upgrade-20210915200708-22848" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-20210915200708-22848
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-20210915200708-22848: (28.1825556s)
--- PASS: TestRunningBinaryUpgrade (981.14s)

TestKubernetesUpgrade (1187.26s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:226: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20210915203315-22848 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=docker

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:226: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20210915203315-22848 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=docker: (12m16.1734283s)
version_upgrade_test.go:231: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-20210915203315-22848
E0915 20:45:46.746754   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210915200110-22848\client.crt: The system cannot find the path specified.
version_upgrade_test.go:231: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-20210915203315-22848: (33.9337392s)
version_upgrade_test.go:236: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-20210915203315-22848 status --format={{.Host}}

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:236: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-20210915203315-22848 status --format={{.Host}}: exit status 7 (2.7380783s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	! Executing "docker container inspect kubernetes-upgrade-20210915203315-22848 --format={{.State.Status}}" took an unusually long time: 2.1286094s
	* Restarting the docker service may improve performance.

** /stderr **
version_upgrade_test.go:238: status error: exit status 7 (may be ok)
version_upgrade_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20210915203315-22848 --memory=2200 --kubernetes-version=v1.22.2-rc.0 --alsologtostderr -v=1 --driver=docker

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20210915203315-22848 --memory=2200 --kubernetes-version=v1.22.2-rc.0 --alsologtostderr -v=1 --driver=docker: (5m5.4411115s)
version_upgrade_test.go:252: (dbg) Run:  kubectl --context kubernetes-upgrade-20210915203315-22848 version --output=json
version_upgrade_test.go:271: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:273: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20210915203315-22848 --memory=2200 --kubernetes-version=v1.14.0 --driver=docker
version_upgrade_test.go:273: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20210915203315-22848 --memory=2200 --kubernetes-version=v1.14.0 --driver=docker: exit status 106 (571.6342ms)

-- stdout --
	* [kubernetes-upgrade-20210915203315-22848] minikube v1.23.0 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	  - KUBECONFIG=C:\Users\jenkins\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=12425
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.22.2-rc.0 cluster to v1.14.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.14.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-20210915203315-22848
	    minikube start -p kubernetes-upgrade-20210915203315-22848 --kubernetes-version=v1.14.0
	    
	    2) Create a second cluster with Kubernetes 1.14.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20210915203315-228482 --kubernetes-version=v1.14.0
	    
	    3) Use the existing cluster at version Kubernetes 1.22.2-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20210915203315-22848 --kubernetes-version=v1.22.2-rc.0
	    

** /stderr **
version_upgrade_test.go:277: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:279: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20210915203315-22848 --memory=2200 --kubernetes-version=v1.22.2-rc.0 --alsologtostderr -v=1 --driver=docker
E0915 20:52:18.751250   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915185528-22848\client.crt: The system cannot find the path specified.
version_upgrade_test.go:279: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20210915203315-22848 --memory=2200 --kubernetes-version=v1.22.2-rc.0 --alsologtostderr -v=1 --driver=docker: (1m10.2822639s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-20210915203315-22848" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-20210915203315-22848
E0915 20:52:35.653931   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915185528-22848\client.crt: The system cannot find the path specified.
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-20210915203315-22848: (37.7216852s)
--- PASS: TestKubernetesUpgrade (1187.26s)
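The K8S_DOWNGRADE_UNSUPPORTED exit in the log above comes from comparing the requested Kubernetes version against the cluster's existing one. A minimal sketch of such a guard, assuming simple semver-style `vMAJOR.MINOR.PATCH` strings (this is illustrative, not minikube's actual implementation; pre-release suffixes like `-rc.0` are deliberately ignored here):

```python
import re

def parse_version(v: str) -> tuple:
    """Turn 'v1.22.2-rc.0' into (1, 22, 2); pre-release tags are ignored."""
    m = re.match(r"v?(\d+)\.(\d+)\.(\d+)", v)
    if m is None:
        raise ValueError(f"unparseable version: {v!r}")
    return tuple(int(x) for x in m.groups())

def check_transition(current: str, requested: str) -> str:
    """Allow upgrades and same-version restarts; refuse downgrades."""
    if parse_version(requested) < parse_version(current):
        # Mirrors the K8S_DOWNGRADE_UNSUPPORTED failure seen in the log.
        return "K8S_DOWNGRADE_UNSUPPORTED"
    return "ok"

print(check_transition("v1.22.2-rc.0", "v1.14.0"))    # K8S_DOWNGRADE_UNSUPPORTED
print(check_transition("v1.14.0", "v1.22.2-rc.0"))    # ok
```

The test exercises exactly these three transitions: upgrade (v1.14.0 → v1.22.2-rc.0), attempted downgrade (refused with exit status 106), then a same-version restart.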

TestMissingContainerUpgrade (756.23s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:313: (dbg) Run:  C:\Users\jenkins\AppData\Local\Temp\minikube-v1.9.1.3571780711.exe start -p missing-upgrade-20210915202421-22848 --memory=2200 --driver=docker

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:313: (dbg) Done: C:\Users\jenkins\AppData\Local\Temp\minikube-v1.9.1.3571780711.exe start -p missing-upgrade-20210915202421-22848 --memory=2200 --driver=docker: (8m1.5530595s)
version_upgrade_test.go:322: (dbg) Run:  docker stop missing-upgrade-20210915202421-22848

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:322: (dbg) Done: docker stop missing-upgrade-20210915202421-22848: (13.5196419s)
version_upgrade_test.go:327: (dbg) Run:  docker rm missing-upgrade-20210915202421-22848
version_upgrade_test.go:333: (dbg) Run:  out/minikube-windows-amd64.exe start -p missing-upgrade-20210915202421-22848 --memory=2200 --alsologtostderr -v=1 --driver=docker

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:333: (dbg) Done: out/minikube-windows-amd64.exe start -p missing-upgrade-20210915202421-22848 --memory=2200 --alsologtostderr -v=1 --driver=docker: (3m53.3985642s)
helpers_test.go:176: Cleaning up "missing-upgrade-20210915202421-22848" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p missing-upgrade-20210915202421-22848
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p missing-upgrade-20210915202421-22848: (26.0405715s)
--- PASS: TestMissingContainerUpgrade (756.23s)

TestPause/serial/Start (531.71s)

=== RUN   TestPause/serial/Start

=== CONT  TestPause/serial/Start
pause_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-20210915200708-22848 --memory=2048 --install-addons=false --wait=all --driver=docker

=== CONT  TestPause/serial/Start
pause_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-20210915200708-22848 --memory=2048 --install-addons=false --wait=all --driver=docker: (8m51.7107861s)
--- PASS: TestPause/serial/Start (531.71s)

TestStoppedBinaryUpgrade/Upgrade (981.11s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:187: (dbg) Run:  C:\Users\jenkins\AppData\Local\Temp\minikube-v1.9.0.3547081578.exe start -p stopped-upgrade-20210915200708-22848 --memory=2200 --vm-driver=docker

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:187: (dbg) Done: C:\Users\jenkins\AppData\Local\Temp\minikube-v1.9.0.3547081578.exe start -p stopped-upgrade-20210915200708-22848 --memory=2200 --vm-driver=docker: (12m25.152885s)
version_upgrade_test.go:196: (dbg) Run:  C:\Users\jenkins\AppData\Local\Temp\minikube-v1.9.0.3547081578.exe -p stopped-upgrade-20210915200708-22848 stop
version_upgrade_test.go:196: (dbg) Done: C:\Users\jenkins\AppData\Local\Temp\minikube-v1.9.0.3547081578.exe -p stopped-upgrade-20210915200708-22848 stop: (31.0810266s)
version_upgrade_test.go:202: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-20210915200708-22848 --memory=2200 --alsologtostderr -v=1 --driver=docker
E0915 20:20:46.758456   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210915200110-22848\client.crt: The system cannot find the path specified.

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:202: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-20210915200708-22848 --memory=2200 --alsologtostderr -v=1 --driver=docker: (3m24.8719506s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (981.11s)

TestPause/serial/SecondStartNoReconfiguration (93.45s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:90: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-20210915200708-22848 --alsologtostderr -v=1 --driver=docker
E0915 20:16:14.470991   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210915200110-22848\client.crt: The system cannot find the path specified.
E0915 20:16:35.356221   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915183056-22848\client.crt: The system cannot find the path specified.
pause_test.go:90: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-20210915200708-22848 --alsologtostderr -v=1 --driver=docker: (1m33.4148186s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (93.45s)

TestPause/serial/Pause (9.97s)

=== RUN   TestPause/serial/Pause
pause_test.go:108: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-20210915200708-22848 --alsologtostderr -v=5
E0915 20:17:35.669284   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915185528-22848\client.crt: The system cannot find the path specified.
pause_test.go:108: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-20210915200708-22848 --alsologtostderr -v=5: (9.9683914s)
--- PASS: TestPause/serial/Pause (9.97s)

TestPause/serial/Unpause (9.8s)

=== RUN   TestPause/serial/Unpause
pause_test.go:119: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p pause-20210915200708-22848 --alsologtostderr -v=5
pause_test.go:119: (dbg) Done: out/minikube-windows-amd64.exe unpause -p pause-20210915200708-22848 --alsologtostderr -v=5: (9.7961503s)
--- PASS: TestPause/serial/Unpause (9.80s)

TestPause/serial/PauseAgain (12.84s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:108: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-20210915200708-22848 --alsologtostderr -v=5
E0915 20:18:32.256820   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915183056-22848\client.crt: The system cannot find the path specified.

=== CONT  TestPause/serial/PauseAgain
pause_test.go:108: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-20210915200708-22848 --alsologtostderr -v=5: (12.8399076s)
--- PASS: TestPause/serial/PauseAgain (12.84s)

TestStoppedBinaryUpgrade/MinikubeLogs (16.4s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:210: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-20210915200708-22848
E0915 20:23:32.258944   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915183056-22848\client.crt: The system cannot find the path specified.

=== CONT  TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:210: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-20210915200708-22848: (16.3959684s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (16.40s)

TestStartStop/group/old-k8s-version/serial/FirstStart (907.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-20210915203352-22848 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.14.0

=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe start -p old-k8s-version-20210915203352-22848 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.14.0: (15m7.271926s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (907.27s)

TestStartStop/group/no-preload/serial/FirstStart (418.81s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-20210915203420-22848 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.22.2-rc.0
E0915 20:35:38.755244   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915185528-22848\client.crt: The system cannot find the path specified.
E0915 20:35:46.750314   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210915200110-22848\client.crt: The system cannot find the path specified.

=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe start -p no-preload-20210915203420-22848 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.22.2-rc.0: (6m58.8052192s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (418.81s)

TestStartStop/group/embed-certs/serial/FirstStart (496.79s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-20210915203657-22848 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.22.1
E0915 20:37:35.661797   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915185528-22848\client.crt: The system cannot find the path specified.
E0915 20:38:32.249744   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915183056-22848\client.crt: The system cannot find the path specified.
E0915 20:40:46.749022   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210915200110-22848\client.crt: The system cannot find the path specified.

=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-20210915203657-22848 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.22.1: (8m16.7878568s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (496.79s)

TestStartStop/group/no-preload/serial/DeployApp (33.21s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context no-preload-20210915203420-22848 create -f testdata\busybox.yaml
start_stop_delete_test.go:181: (dbg) Done: kubectl --context no-preload-20210915203420-22848 create -f testdata\busybox.yaml: (2.0174491s)
start_stop_delete_test.go:181: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [6437180a-099c-4680-a933-b5f1ddc2ce24] Pending
helpers_test.go:343: "busybox" [6437180a-099c-4680-a933-b5f1ddc2ce24] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [6437180a-099c-4680-a933-b5f1ddc2ce24] Running
start_stop_delete_test.go:181: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 29.2675732s
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context no-preload-20210915203420-22848 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:181: (dbg) Done: kubectl --context no-preload-20210915203420-22848 exec busybox -- /bin/sh -c "ulimit -n": (1.838912s)
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (33.21s)
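The DeployApp step above polls pods matching `integration-test=busybox` for up to 8m0s until they report Running and Ready. A minimal sketch of that readiness decision over the exact state sequence logged above (a hypothetical helper for illustration, not the test's actual polling code):

```python
# Pod states observed in the log for "busybox":
#   Pending
#   Pending / Ready:ContainersNotReady / ContainersReady:ContainersNotReady
#   Running
events = [
    {"phase": "Pending", "ready": False},
    {"phase": "Pending", "ready": False},
    {"phase": "Running", "ready": True},
]

def wait_for_healthy(events, max_polls=480):
    """Return the poll index at which the pod first became healthy, or None.

    max_polls=480 models the test's 8m0s budget at one poll per second
    (an assumption for illustration).
    """
    for i, ev in enumerate(events[:max_polls]):
        if ev["phase"] == "Running" and ev["ready"]:
            return i
    return None

print(wait_for_healthy(events))  # 2
```

A pod in phase Pending with `Ready:ContainersNotReady` (the middle state above) must not count as healthy, which is why both the phase and the readiness condition are checked.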

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (15.56s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-20210915203420-22848 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:190: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-20210915203420-22848 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (14.4444055s)
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context no-preload-20210915203420-22848 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:200: (dbg) Done: kubectl --context no-preload-20210915203420-22848 describe deploy/metrics-server -n kube-system: (1.1019994s)
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (15.56s)

TestStartStop/group/no-preload/serial/Stop (34.01s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-windows-amd64.exe stop -p no-preload-20210915203420-22848 --alsologtostderr -v=3
E0915 20:42:35.658727   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915185528-22848\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:213: (dbg) Done: out/minikube-windows-amd64.exe stop -p no-preload-20210915203420-22848 --alsologtostderr -v=3: (34.0042282s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (34.01s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (6.17s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20210915203420-22848 -n no-preload-20210915203420-22848
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20210915203420-22848 -n no-preload-20210915203420-22848: exit status 7 (3.0841068s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	! Executing "docker container inspect no-preload-20210915203420-22848 --format={{.State.Status}}" took an unusually long time: 2.5209376s
	* Restarting the docker service may improve performance.

** /stderr **
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p no-preload-20210915203420-22848 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:231: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p no-preload-20210915203420-22848 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (3.0835521s)
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (6.17s)

TestStartStop/group/no-preload/serial/SecondStart (883.31s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-20210915203420-22848 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.22.2-rc.0
E0915 20:43:32.247117   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915183056-22848\client.crt: The system cannot find the path specified.
E0915 20:43:49.826668   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210915200110-22848\client.crt: The system cannot find the path specified.

=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-windows-amd64.exe start -p no-preload-20210915203420-22848 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.22.2-rc.0: (14m35.9939901s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20210915203420-22848 -n no-preload-20210915203420-22848
start_stop_delete_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20210915203420-22848 -n no-preload-20210915203420-22848: (7.3140999s)
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (883.31s)

TestStartStop/group/embed-certs/serial/DeployApp (60.38s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context embed-certs-20210915203657-22848 create -f testdata\busybox.yaml
start_stop_delete_test.go:181: (dbg) Done: kubectl --context embed-certs-20210915203657-22848 create -f testdata\busybox.yaml: (2.1922037s)
start_stop_delete_test.go:181: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [08158298-a51c-4a10-a428-806fddb1a9d3] Pending
helpers_test.go:343: "busybox" [08158298-a51c-4a10-a428-806fddb1a9d3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])

=== CONT  TestStartStop/group/embed-certs/serial/DeployApp
helpers_test.go:343: "busybox" [08158298-a51c-4a10-a428-806fddb1a9d3] Running

=== CONT  TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:181: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 56.2564315s
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context embed-certs-20210915203657-22848 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:181: (dbg) Done: kubectl --context embed-certs-20210915203657-22848 exec busybox -- /bin/sh -c "ulimit -n": (1.858935s)
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (60.38s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (15.61s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-20210915203657-22848 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:190: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-20210915203657-22848 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (14.9385752s)
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context embed-certs-20210915203657-22848 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (15.61s)

TestStartStop/group/embed-certs/serial/Stop (36.53s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-windows-amd64.exe stop -p embed-certs-20210915203657-22848 --alsologtostderr -v=3
start_stop_delete_test.go:213: (dbg) Done: out/minikube-windows-amd64.exe stop -p embed-certs-20210915203657-22848 --alsologtostderr -v=3: (36.5259223s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (36.53s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (6.3s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20210915203657-22848 -n embed-certs-20210915203657-22848
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20210915203657-22848 -n embed-certs-20210915203657-22848: exit status 7 (3.219835s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	! Executing "docker container inspect embed-certs-20210915203657-22848 --format={{.State.Status}}" took an unusually long time: 2.5515149s
	* Restarting the docker service may improve performance.

** /stderr **
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p embed-certs-20210915203657-22848 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:231: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p embed-certs-20210915203657-22848 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (3.0792844s)
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (6.30s)

TestStartStop/group/embed-certs/serial/SecondStart (976.56s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-20210915203657-22848 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.22.1
E0915 20:47:35.656929   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915185528-22848\client.crt: The system cannot find the path specified.
E0915 20:48:32.246158   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915183056-22848\client.crt: The system cannot find the path specified.

=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-20210915203657-22848 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.22.1: (16m2.6615483s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20210915203657-22848 -n embed-certs-20210915203657-22848

=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20210915203657-22848 -n embed-certs-20210915203657-22848: (13.8965185s)
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (976.56s)

TestStartStop/group/old-k8s-version/serial/DeployApp (29.36s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context old-k8s-version-20210915203352-22848 create -f testdata\busybox.yaml
start_stop_delete_test.go:181: (dbg) Done: kubectl --context old-k8s-version-20210915203352-22848 create -f testdata\busybox.yaml: (1.7586306s)
start_stop_delete_test.go:181: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [5b21b7a6-1666-11ec-901b-02421148780c] Pending
helpers_test.go:343: "busybox" [5b21b7a6-1666-11ec-901b-02421148780c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [5b21b7a6-1666-11ec-901b-02421148780c] Running
start_stop_delete_test.go:181: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 25.314729s
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context old-k8s-version-20210915203352-22848 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:181: (dbg) Done: kubectl --context old-k8s-version-20210915203352-22848 exec busybox -- /bin/sh -c "ulimit -n": (2.2103306s)
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (29.36s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (13.09s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-20210915203352-22848 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:190: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-20210915203352-22848 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (12.5155973s)
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context old-k8s-version-20210915203352-22848 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (13.09s)

TestStartStop/group/old-k8s-version/serial/Stop (31.05s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-windows-amd64.exe stop -p old-k8s-version-20210915203352-22848 --alsologtostderr -v=3
E0915 20:49:55.348057   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915183056-22848\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:213: (dbg) Done: out/minikube-windows-amd64.exe stop -p old-k8s-version-20210915203352-22848 --alsologtostderr -v=3: (31.0503354s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (31.05s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (5.45s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20210915203352-22848 -n old-k8s-version-20210915203352-22848
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20210915203352-22848 -n old-k8s-version-20210915203352-22848: exit status 7 (2.7016212s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	! Executing "docker container inspect old-k8s-version-20210915203352-22848 --format={{.State.Status}}" took an unusually long time: 2.1680829s
	* Restarting the docker service may improve performance.

** /stderr **
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p old-k8s-version-20210915203352-22848 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:231: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p old-k8s-version-20210915203352-22848 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (2.7481077s)
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (5.45s)

TestStartStop/group/default-k8s-different-port/serial/FirstStart (535.03s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-different-port-20210915205315-22848 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.22.1
E0915 20:53:32.243111   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915183056-22848\client.crt: The system cannot find the path specified.
E0915 20:55:46.745370   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210915200110-22848\client.crt: The system cannot find the path specified.

=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-different-port-20210915205315-22848 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.22.1: (8m55.0255605s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/FirstStart (535.03s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (113.2s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-t9kgq" [76c38100-22ea-41d1-920b-dee902b48231] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0915 20:57:35.653006   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915185528-22848\client.crt: The system cannot find the path specified.
E0915 20:58:32.240858   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915183056-22848\client.crt: The system cannot find the path specified.
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-t9kgq" [76c38100-22ea-41d1-920b-dee902b48231] Running
start_stop_delete_test.go:259: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 1m53.1771501s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (113.20s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.52s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-t9kgq" [76c38100-22ea-41d1-920b-dee902b48231] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.2417939s
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context no-preload-20210915203420-22848 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:276: (dbg) Done: kubectl --context no-preload-20210915203420-22848 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: (1.2337184s)
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.52s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (7.78s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p no-preload-20210915203420-22848 "sudo crictl images -o json"
start_stop_delete_test.go:289: (dbg) Done: out/minikube-windows-amd64.exe ssh -p no-preload-20210915203420-22848 "sudo crictl images -o json": (7.7760757s)
start_stop_delete_test.go:289: Found non-minikube image: busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (7.78s)

TestStartStop/group/no-preload/serial/Pause (56.28s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe pause -p no-preload-20210915203420-22848 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe pause -p no-preload-20210915203420-22848 --alsologtostderr -v=1: (15.0308774s)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-20210915203420-22848 -n no-preload-20210915203420-22848
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-20210915203420-22848 -n no-preload-20210915203420-22848: exit status 2 (6.2693494s)

-- stdout --
	Paused

-- /stdout --
** stderr ** 
	! Executing "docker container inspect no-preload-20210915203420-22848 --format={{.State.Status}}" took an unusually long time: 2.4834237s
	* Restarting the docker service may improve performance.

** /stderr **
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-20210915203420-22848 -n no-preload-20210915203420-22848
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-20210915203420-22848 -n no-preload-20210915203420-22848: exit status 2 (6.1006821s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	! Executing "docker container inspect no-preload-20210915203420-22848 --format={{.State.Status}}" took an unusually long time: 2.1514979s
	* Restarting the docker service may improve performance.

** /stderr **
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p no-preload-20210915203420-22848 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe unpause -p no-preload-20210915203420-22848 --alsologtostderr -v=1: (12.8878894s)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-20210915203420-22848 -n no-preload-20210915203420-22848
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-20210915203420-22848 -n no-preload-20210915203420-22848: (7.9778105s)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-20210915203420-22848 -n no-preload-20210915203420-22848
E0915 21:00:29.821361   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\skaffold-20210915200110-22848\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-20210915203420-22848 -n no-preload-20210915203420-22848: (8.0122112s)
--- PASS: TestStartStop/group/no-preload/serial/Pause (56.28s)

TestStartStop/group/newest-cni/serial/FirstStart (342.2s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-20210915210129-22848 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.22.2-rc.0
E0915 21:01:31.737880   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\no-preload-20210915203420-22848\client.crt: The system cannot find the path specified.
E0915 21:01:41.978415   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\no-preload-20210915203420-22848\client.crt: The system cannot find the path specified.
E0915 21:02:02.459505   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\no-preload-20210915203420-22848\client.crt: The system cannot find the path specified.

=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe start -p newest-cni-20210915210129-22848 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.22.2-rc.0: (5m42.1962265s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (342.20s)

TestStartStop/group/default-k8s-different-port/serial/DeployApp (80.31s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context default-k8s-different-port-20210915205315-22848 create -f testdata\busybox.yaml
start_stop_delete_test.go:181: (dbg) Done: kubectl --context default-k8s-different-port-20210915205315-22848 create -f testdata\busybox.yaml: (2.4559329s)
start_stop_delete_test.go:181: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [be3233a6-fab2-4dd4-98b6-53e5b4f6ee66] Pending
helpers_test.go:343: "busybox" [be3233a6-fab2-4dd4-98b6-53e5b4f6ee66] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0915 21:02:35.656934   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915185528-22848\client.crt: The system cannot find the path specified.
E0915 21:02:43.425025   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\no-preload-20210915203420-22848\client.crt: The system cannot find the path specified.

=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
helpers_test.go:343: "busybox" [be3233a6-fab2-4dd4-98b6-53e5b4f6ee66] Running
start_stop_delete_test.go:181: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: integration-test=busybox healthy within 1m9.1515232s
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context default-k8s-different-port-20210915205315-22848 exec busybox -- /bin/sh -c "ulimit -n"

=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Done: kubectl --context default-k8s-different-port-20210915205315-22848 exec busybox -- /bin/sh -c "ulimit -n": (8.6092236s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/DeployApp (80.31s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.45s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-n7z6v" [9137d804-be37-4283-857c-3148296de880] Running

=== CONT  TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.4399273s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.45s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (38.61s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-different-port-20210915205315-22848 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0915 21:03:32.238848   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915183056-22848\client.crt: The system cannot find the path specified.

=== CONT  TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-different-port-20210915205315-22848 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (35.8014377s)
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context default-k8s-different-port-20210915205315-22848 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:200: (dbg) Done: kubectl --context default-k8s-different-port-20210915205315-22848 describe deploy/metrics-server -n kube-system: (2.7959454s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (38.61s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (7.9s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-n7z6v" [9137d804-be37-4283-857c-3148296de880] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.2156598s
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context embed-certs-20210915203657-22848 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:276: (dbg) Done: kubectl --context embed-certs-20210915203657-22848 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: (1.6630094s)
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (7.90s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (16.21s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p embed-certs-20210915203657-22848 "sudo crictl images -o json"

=== CONT  TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Done: out/minikube-windows-amd64.exe ssh -p embed-certs-20210915203657-22848 "sudo crictl images -o json": (16.2069627s)
start_stop_delete_test.go:289: Found non-minikube image: busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (16.21s)

TestStartStop/group/embed-certs/serial/Pause (80.33s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe pause -p embed-certs-20210915203657-22848 --alsologtostderr -v=1
E0915 21:04:05.347166   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\no-preload-20210915203420-22848\client.crt: The system cannot find the path specified.

=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe pause -p embed-certs-20210915203657-22848 --alsologtostderr -v=1: (38.5934902s)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-20210915203657-22848 -n embed-certs-20210915203657-22848
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-20210915203657-22848 -n embed-certs-20210915203657-22848: exit status 2 (6.3779669s)

-- stdout --
	Paused

-- /stdout --
** stderr ** 
	! Executing "docker container inspect embed-certs-20210915203657-22848 --format={{.State.Status}}" took an unusually long time: 2.6094082s
	* Restarting the docker service may improve performance.

** /stderr **
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-20210915203657-22848 -n embed-certs-20210915203657-22848

=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-20210915203657-22848 -n embed-certs-20210915203657-22848: exit status 2 (5.9585426s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	! Executing "docker container inspect embed-certs-20210915203657-22848 --format={{.State.Status}}" took an unusually long time: 2.1191424s
	* Restarting the docker service may improve performance.

** /stderr **
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p embed-certs-20210915203657-22848 --alsologtostderr -v=1

=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe unpause -p embed-certs-20210915203657-22848 --alsologtostderr -v=1: (10.4088622s)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-20210915203657-22848 -n embed-certs-20210915203657-22848

=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-20210915203657-22848 -n embed-certs-20210915203657-22848: (10.3600463s)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-20210915203657-22848 -n embed-certs-20210915203657-22848
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-20210915203657-22848 -n embed-certs-20210915203657-22848: (8.6305183s)
--- PASS: TestStartStop/group/embed-certs/serial/Pause (80.33s)

TestStartStop/group/default-k8s-different-port/serial/Stop (40.37s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-windows-amd64.exe stop -p default-k8s-different-port-20210915205315-22848 --alsologtostderr -v=3

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:213: (dbg) Done: out/minikube-windows-amd64.exe stop -p default-k8s-different-port-20210915205315-22848 --alsologtostderr -v=3: (40.3716945s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (40.37s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (6.23s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20210915205315-22848 -n default-k8s-different-port-20210915205315-22848

=== CONT  TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20210915205315-22848 -n default-k8s-different-port-20210915205315-22848: exit status 7 (3.0544068s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	! Executing "docker container inspect default-k8s-different-port-20210915205315-22848 --format={{.State.Status}}" took an unusually long time: 2.4000198s
	* Restarting the docker service may improve performance.

** /stderr **
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p default-k8s-different-port-20210915205315-22848 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:231: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p default-k8s-different-port-20210915205315-22848 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (3.1778783s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (6.23s)

TestStartStop/group/default-k8s-different-port/serial/SecondStart (513.05s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-different-port-20210915205315-22848 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.22.1

=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-different-port-20210915205315-22848 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.22.1: (8m26.8125288s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20210915205315-22848 -n default-k8s-different-port-20210915205315-22848
start_stop_delete_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20210915205315-22848 -n default-k8s-different-port-20210915205315-22848: (6.2350077s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/SecondStart (513.05s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (7.28s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-20210915210129-22848 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:190: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-20210915210129-22848 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (7.275299s)
start_stop_delete_test.go:196: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (7.28s)

TestStartStop/group/newest-cni/serial/Stop (20.17s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-windows-amd64.exe stop -p newest-cni-20210915210129-22848 --alsologtostderr -v=3
E0915 21:07:35.649126   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915185528-22848\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:213: (dbg) Done: out/minikube-windows-amd64.exe stop -p newest-cni-20210915210129-22848 --alsologtostderr -v=3: (20.1678083s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (20.17s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (4.98s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20210915210129-22848 -n newest-cni-20210915210129-22848
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20210915210129-22848 -n newest-cni-20210915210129-22848: exit status 7 (2.5504197s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p newest-cni-20210915210129-22848 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:231: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p newest-cni-20210915210129-22848 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (2.42712s)
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (4.98s)

TestStartStop/group/newest-cni/serial/SecondStart (96.31s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-20210915210129-22848 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.22.2-rc.0
E0915 21:08:32.240179   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915183056-22848\client.crt: The system cannot find the path specified.
E0915 21:08:58.748077   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\functional-20210915185528-22848\client.crt: The system cannot find the path specified.
E0915 21:09:01.417163   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\old-k8s-version-20210915203352-22848\client.crt: The system cannot find the path specified.
E0915 21:09:01.426860   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\old-k8s-version-20210915203352-22848\client.crt: The system cannot find the path specified.
E0915 21:09:01.447181   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\old-k8s-version-20210915203352-22848\client.crt: The system cannot find the path specified.
E0915 21:09:01.468430   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\old-k8s-version-20210915203352-22848\client.crt: The system cannot find the path specified.
E0915 21:09:01.510538   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\old-k8s-version-20210915203352-22848\client.crt: The system cannot find the path specified.
E0915 21:09:01.592081   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\old-k8s-version-20210915203352-22848\client.crt: The system cannot find the path specified.
E0915 21:09:01.753240   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\old-k8s-version-20210915203352-22848\client.crt: The system cannot find the path specified.
E0915 21:09:02.073415   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\old-k8s-version-20210915203352-22848\client.crt: The system cannot find the path specified.
E0915 21:09:02.719890   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\old-k8s-version-20210915203352-22848\client.crt: The system cannot find the path specified.
E0915 21:09:04.001328   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\old-k8s-version-20210915203352-22848\client.crt: The system cannot find the path specified.
E0915 21:09:06.562874   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\old-k8s-version-20210915203352-22848\client.crt: The system cannot find the path specified.
E0915 21:09:11.684298   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\old-k8s-version-20210915203352-22848\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:241: (dbg) Done: out/minikube-windows-amd64.exe start -p newest-cni-20210915210129-22848 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.22.2-rc.0: (1m30.2513652s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20210915210129-22848 -n newest-cni-20210915210129-22848
start_stop_delete_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20210915210129-22848 -n newest-cni-20210915210129-22848: (6.0556544s)
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (96.31s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:258: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:269: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (6.11s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p newest-cni-20210915210129-22848 "sudo crictl images -o json"
E0915 21:09:21.925046   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\old-k8s-version-20210915203352-22848\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:289: (dbg) Done: out/minikube-windows-amd64.exe ssh -p newest-cni-20210915210129-22848 "sudo crictl images -o json": (6.1119243s)
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (6.11s)

TestStartStop/group/newest-cni/serial/Pause (36.11s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe pause -p newest-cni-20210915210129-22848 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe pause -p newest-cni-20210915210129-22848 --alsologtostderr -v=1: (7.5878715s)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-20210915210129-22848 -n newest-cni-20210915210129-22848
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-20210915210129-22848 -n newest-cni-20210915210129-22848: exit status 2 (5.0288182s)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-20210915210129-22848 -n newest-cni-20210915210129-22848
E0915 21:09:42.405602   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\old-k8s-version-20210915203352-22848\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-20210915210129-22848 -n newest-cni-20210915210129-22848: exit status 2 (5.0725392s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p newest-cni-20210915210129-22848 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe unpause -p newest-cni-20210915210129-22848 --alsologtostderr -v=1: (5.8196965s)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-20210915210129-22848 -n newest-cni-20210915210129-22848
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-20210915210129-22848 -n newest-cni-20210915210129-22848: (5.7273908s)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-20210915210129-22848 -n newest-cni-20210915210129-22848
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-20210915210129-22848 -n newest-cni-20210915210129-22848: (6.8699817s)
--- PASS: TestStartStop/group/newest-cni/serial/Pause (36.11s)

TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (8.16s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-bbk9p" [5b55844c-2725-45c6-b798-3fcc70153451] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-bbk9p" [5b55844c-2725-45c6-b798-3fcc70153451] Running
E0915 21:13:32.235972   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\addons-20210915183056-22848\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:259: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 8.1266195s
--- PASS: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (8.16s)

TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.74s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-bbk9p" [5b55844c-2725-45c6-b798-3fcc70153451] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0387381s
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context default-k8s-different-port-20210915205315-22848 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.74s)

TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (5.45s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p default-k8s-different-port-20210915205315-22848 "sudo crictl images -o json"
start_stop_delete_test.go:289: (dbg) Done: out/minikube-windows-amd64.exe ssh -p default-k8s-different-port-20210915205315-22848 "sudo crictl images -o json": (5.4473123s)
start_stop_delete_test.go:289: Found non-minikube image: busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (5.45s)

TestStartStop/group/default-k8s-different-port/serial/Pause (35.04s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe pause -p default-k8s-different-port-20210915205315-22848 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe pause -p default-k8s-different-port-20210915205315-22848 --alsologtostderr -v=1: (7.7457405s)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-different-port-20210915205315-22848 -n default-k8s-different-port-20210915205315-22848
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-different-port-20210915205315-22848 -n default-k8s-different-port-20210915205315-22848: exit status 2 (4.8253265s)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-different-port-20210915205315-22848 -n default-k8s-different-port-20210915205315-22848
E0915 21:14:01.413790   22848 cert_rotation.go:168] key failed with : open C:\Users\jenkins\minikube-integration\.minikube\profiles\old-k8s-version-20210915203352-22848\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-different-port-20210915205315-22848 -n default-k8s-different-port-20210915205315-22848: exit status 2 (4.7805073s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p default-k8s-different-port-20210915205315-22848 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe unpause -p default-k8s-different-port-20210915205315-22848 --alsologtostderr -v=1: (5.6361772s)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-different-port-20210915205315-22848 -n default-k8s-different-port-20210915205315-22848
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-different-port-20210915205315-22848 -n default-k8s-different-port-20210915205315-22848: (5.2918666s)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-different-port-20210915205315-22848 -n default-k8s-different-port-20210915205315-22848
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-different-port-20210915205315-22848 -n default-k8s-different-port-20210915205315-22848: (6.7549461s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Pause (35.04s)

Test skip (22/232)

TestDownloadOnly/v1.14.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.14.0/cached-images
aaa_download_only_test.go:120: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.14.0/cached-images (0.00s)

TestDownloadOnly/v1.14.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.14.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.14.0/binaries (0.00s)

TestDownloadOnly/v1.22.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.22.1/cached-images
aaa_download_only_test.go:120: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.22.1/cached-images (0.00s)

TestDownloadOnly/v1.22.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.22.1/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.22.1/binaries (0.00s)

TestDownloadOnly/v1.22.2-rc.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.22.2-rc.0/cached-images
aaa_download_only_test.go:120: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.22.2-rc.0/cached-images (0.00s)

TestDownloadOnly/v1.22.2-rc.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.22.2-rc.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.22.2-rc.0/binaries (0.00s)

TestAddons/parallel/Registry (47.46s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:253: registry stabilized in 124.9461ms
addons_test.go:255: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:343: "registry-cbsl2" [fc214be3-a1d4-438e-a310-50582511ee00] Running
addons_test.go:255: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.0865117s
addons_test.go:258: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:343: "registry-proxy-dsnh5" [ef6b9aa4-354c-4b73-97b9-a9129fc121ad] Running
addons_test.go:258: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.0798163s
addons_test.go:263: (dbg) Run:  kubectl --context addons-20210915183056-22848 delete po -l run=registry-test --now
addons_test.go:268: (dbg) Run:  kubectl --context addons-20210915183056-22848 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:268: (dbg) Done: kubectl --context addons-20210915183056-22848 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (36.5205979s)
addons_test.go:278: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (47.46s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:42: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:115: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:188: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DashboardCmd (300.07s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:977: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-20210915185528-22848 --alsologtostderr -v=1]
functional_test.go:988: output didn't produce a URL
functional_test.go:982: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-20210915185528-22848 --alsologtostderr -v=1] ...
helpers_test.go:489: unable to find parent, assuming dead: process does not exist
--- SKIP: TestFunctional/parallel/DashboardCmd (300.07s)

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:59: skipping: mount broken on windows: https://github.com/kubernetes/minikube/issues/8303
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/ServiceCmd (55.68s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1477: (dbg) Run:  kubectl --context functional-20210915185528-22848 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1483: (dbg) Run:  kubectl --context functional-20210915185528-22848 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1488: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:343: "hello-node-6cbfcd7cbc-6zxnx" [5db4631a-6341-4ed5-b14f-a19ecbbf28a8] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:343: "hello-node-6cbfcd7cbc-6zxnx" [5db4631a-6341-4ed5-b14f-a19ecbbf28a8] Running
functional_test.go:1488: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 48.1518966s
functional_test.go:1492: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210915185528-22848 service list
functional_test.go:1492: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210915185528-22848 service list: (6.4436084s)
functional_test.go:1501: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmd (55.68s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:647: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:193: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:35: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:78: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

TestStartStop/group/disable-driver-mounts (12.28s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-20210915205302-22848" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p disable-driver-mounts-20210915205302-22848
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p disable-driver-mounts-20210915205302-22848: (12.2770349s)
--- SKIP: TestStartStop/group/disable-driver-mounts (12.28s)

TestNetworkPlugins/group/flannel (9.05s)

=== RUN   TestNetworkPlugins/group/flannel
net_test.go:77: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:176: Cleaning up "flannel-20210915202329-22848" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p flannel-20210915202329-22848
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p flannel-20210915202329-22848: (9.0487077s)
--- SKIP: TestNetworkPlugins/group/flannel (9.05s)